00:00:00.001 Started by upstream project "autotest-per-patch" build number 132365 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.066 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.067 The recommended git tool is: git 00:00:00.067 using credential 00000000-0000-0000-0000-000000000002 00:00:00.070 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.109 Fetching changes from the remote Git repository 00:00:00.111 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.166 Using shallow fetch with depth 1 00:00:00.166 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.166 > git --version # timeout=10 00:00:00.212 > git --version # 'git version 2.39.2' 00:00:00.212 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.249 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.249 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.293 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.304 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.314 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:03.314 > git config core.sparsecheckout # timeout=10 00:00:03.326 > git read-tree -mu HEAD # timeout=10 00:00:03.339 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:03.359 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:03.359 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:03.443 [Pipeline] Start of Pipeline 00:00:03.457 [Pipeline] library 00:00:03.459 Loading library shm_lib@master 00:00:03.459 Library shm_lib@master is cached. Copying from home. 00:00:03.475 [Pipeline] node 00:00:03.484 Running on VM-host-SM17 in /var/jenkins/workspace/raid-vg-autotest_2 00:00:03.491 [Pipeline] { 00:00:03.501 [Pipeline] catchError 00:00:03.503 [Pipeline] { 00:00:03.512 [Pipeline] wrap 00:00:03.520 [Pipeline] { 00:00:03.526 [Pipeline] stage 00:00:03.528 [Pipeline] { (Prologue) 00:00:03.540 [Pipeline] echo 00:00:03.541 Node: VM-host-SM17 00:00:03.544 [Pipeline] cleanWs 00:00:03.552 [WS-CLEANUP] Deleting project workspace... 00:00:03.552 [WS-CLEANUP] Deferred wipeout is used... 00:00:03.558 [WS-CLEANUP] done 00:00:03.734 [Pipeline] setCustomBuildProperty 00:00:03.795 [Pipeline] httpRequest 00:00:04.166 [Pipeline] echo 00:00:04.168 Sorcerer 10.211.164.20 is alive 00:00:04.175 [Pipeline] retry 00:00:04.177 [Pipeline] { 00:00:04.190 [Pipeline] httpRequest 00:00:04.195 HttpMethod: GET 00:00:04.195 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.196 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.206 Response Code: HTTP/1.1 200 OK 00:00:04.207 Success: Status code 200 is in the accepted range: 200,404 00:00:04.207 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.533 [Pipeline] } 00:00:05.555 [Pipeline] // retry 00:00:05.564 [Pipeline] sh 00:00:05.843 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.856 [Pipeline] httpRequest 00:00:06.718 [Pipeline] echo 00:00:06.719 Sorcerer 10.211.164.20 is alive 00:00:06.725 [Pipeline] retry 00:00:06.727 [Pipeline] { 00:00:06.739 [Pipeline] httpRequest 00:00:06.742 HttpMethod: GET 00:00:06.743 URL: 
http://10.211.164.20/packages/spdk_6fc96a60fa896bf51b1b42f73524626c54d3caa6.tar.gz 00:00:06.743 Sending request to url: http://10.211.164.20/packages/spdk_6fc96a60fa896bf51b1b42f73524626c54d3caa6.tar.gz 00:00:06.745 Response Code: HTTP/1.1 200 OK 00:00:06.745 Success: Status code 200 is in the accepted range: 200,404 00:00:06.746 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_6fc96a60fa896bf51b1b42f73524626c54d3caa6.tar.gz 00:02:20.596 [Pipeline] } 00:02:20.614 [Pipeline] // retry 00:02:20.622 [Pipeline] sh 00:02:20.901 + tar --no-same-owner -xf spdk_6fc96a60fa896bf51b1b42f73524626c54d3caa6.tar.gz 00:02:24.202 [Pipeline] sh 00:02:24.483 + git -C spdk log --oneline -n5 00:02:24.483 6fc96a60f test/nvmf: Prepare replacements for the network setup 00:02:24.483 f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb 00:02:24.483 8d982eda9 dpdk: add adjustments for recent rte_power changes 00:02:24.483 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option 00:02:24.483 73f18e890 lib/reduce: fix the magic number of empty mapping detection. 
00:02:24.502 [Pipeline] writeFile 00:02:24.517 [Pipeline] sh 00:02:24.799 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:24.812 [Pipeline] sh 00:02:25.092 + cat autorun-spdk.conf 00:02:25.092 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:25.092 SPDK_RUN_ASAN=1 00:02:25.092 SPDK_RUN_UBSAN=1 00:02:25.092 SPDK_TEST_RAID=1 00:02:25.092 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:25.099 RUN_NIGHTLY=0 00:02:25.101 [Pipeline] } 00:02:25.116 [Pipeline] // stage 00:02:25.132 [Pipeline] stage 00:02:25.134 [Pipeline] { (Run VM) 00:02:25.147 [Pipeline] sh 00:02:25.429 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:25.429 + echo 'Start stage prepare_nvme.sh' 00:02:25.429 Start stage prepare_nvme.sh 00:02:25.429 + [[ -n 1 ]] 00:02:25.429 + disk_prefix=ex1 00:02:25.429 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]] 00:02:25.429 + [[ -e /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]] 00:02:25.429 + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf 00:02:25.429 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:25.429 ++ SPDK_RUN_ASAN=1 00:02:25.429 ++ SPDK_RUN_UBSAN=1 00:02:25.429 ++ SPDK_TEST_RAID=1 00:02:25.429 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:25.429 ++ RUN_NIGHTLY=0 00:02:25.429 + cd /var/jenkins/workspace/raid-vg-autotest_2 00:02:25.429 + nvme_files=() 00:02:25.429 + declare -A nvme_files 00:02:25.429 + backend_dir=/var/lib/libvirt/images/backends 00:02:25.429 + nvme_files['nvme.img']=5G 00:02:25.429 + nvme_files['nvme-cmb.img']=5G 00:02:25.429 + nvme_files['nvme-multi0.img']=4G 00:02:25.429 + nvme_files['nvme-multi1.img']=4G 00:02:25.429 + nvme_files['nvme-multi2.img']=4G 00:02:25.429 + nvme_files['nvme-openstack.img']=8G 00:02:25.429 + nvme_files['nvme-zns.img']=5G 00:02:25.429 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:25.429 + (( SPDK_TEST_FTL == 1 )) 00:02:25.429 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:25.429 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:25.429 + for nvme in "${!nvme_files[@]}" 00:02:25.429 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:02:25.429 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:25.429 + for nvme in "${!nvme_files[@]}" 00:02:25.429 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:02:25.429 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:25.429 + for nvme in "${!nvme_files[@]}" 00:02:25.429 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:02:25.429 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:25.429 + for nvme in "${!nvme_files[@]}" 00:02:25.429 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:02:25.429 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:25.429 + for nvme in "${!nvme_files[@]}" 00:02:25.429 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:02:25.429 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:25.429 + for nvme in "${!nvme_files[@]}" 00:02:25.429 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:02:25.429 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:25.429 + for nvme in "${!nvme_files[@]}" 00:02:25.429 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:02:25.687 
Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:25.687 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:02:25.946 + echo 'End stage prepare_nvme.sh' 00:02:25.946 End stage prepare_nvme.sh 00:02:25.958 [Pipeline] sh 00:02:26.239 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:26.239 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:02:26.239 00:02:26.239 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant 00:02:26.239 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk 00:02:26.239 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2 00:02:26.239 HELP=0 00:02:26.239 DRY_RUN=0 00:02:26.239 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:02:26.239 NVME_DISKS_TYPE=nvme,nvme, 00:02:26.239 NVME_AUTO_CREATE=0 00:02:26.239 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:02:26.239 NVME_CMB=,, 00:02:26.239 NVME_PMR=,, 00:02:26.239 NVME_ZNS=,, 00:02:26.239 NVME_MS=,, 00:02:26.239 NVME_FDP=,, 00:02:26.239 SPDK_VAGRANT_DISTRO=fedora39 00:02:26.239 SPDK_VAGRANT_VMCPU=10 00:02:26.239 SPDK_VAGRANT_VMRAM=12288 00:02:26.239 SPDK_VAGRANT_PROVIDER=libvirt 00:02:26.239 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:26.239 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:26.239 SPDK_OPENSTACK_NETWORK=0 00:02:26.239 VAGRANT_PACKAGE_BOX=0 00:02:26.239 
VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:02:26.239 FORCE_DISTRO=true 00:02:26.239 VAGRANT_BOX_VERSION= 00:02:26.239 EXTRA_VAGRANTFILES= 00:02:26.239 NIC_MODEL=e1000 00:02:26.239 00:02:26.239 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt' 00:02:26.239 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2 00:02:29.538 Bringing machine 'default' up with 'libvirt' provider... 00:02:30.472 ==> default: Creating image (snapshot of base box volume). 00:02:30.730 ==> default: Creating domain with the following settings... 00:02:30.730 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732091761_7075a56509905c5ad3a3 00:02:30.730 ==> default: -- Domain type: kvm 00:02:30.730 ==> default: -- Cpus: 10 00:02:30.730 ==> default: -- Feature: acpi 00:02:30.730 ==> default: -- Feature: apic 00:02:30.730 ==> default: -- Feature: pae 00:02:30.730 ==> default: -- Memory: 12288M 00:02:30.730 ==> default: -- Memory Backing: hugepages: 00:02:30.730 ==> default: -- Management MAC: 00:02:30.730 ==> default: -- Loader: 00:02:30.730 ==> default: -- Nvram: 00:02:30.730 ==> default: -- Base box: spdk/fedora39 00:02:30.730 ==> default: -- Storage pool: default 00:02:30.730 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732091761_7075a56509905c5ad3a3.img (20G) 00:02:30.730 ==> default: -- Volume Cache: default 00:02:30.730 ==> default: -- Kernel: 00:02:30.730 ==> default: -- Initrd: 00:02:30.730 ==> default: -- Graphics Type: vnc 00:02:30.730 ==> default: -- Graphics Port: -1 00:02:30.730 ==> default: -- Graphics IP: 127.0.0.1 00:02:30.730 ==> default: -- Graphics Password: Not defined 00:02:30.730 ==> default: -- Video Type: cirrus 00:02:30.730 ==> default: -- Video VRAM: 9216 00:02:30.730 ==> default: -- Sound Type: 00:02:30.730 ==> default: -- Keymap: en-us 00:02:30.730 ==> default: -- TPM Path: 00:02:30.730 
==> default: -- INPUT: type=mouse, bus=ps2 00:02:30.730 ==> default: -- Command line args: 00:02:30.730 ==> default: -> value=-device, 00:02:30.730 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:30.730 ==> default: -> value=-drive, 00:02:30.730 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:02:30.730 ==> default: -> value=-device, 00:02:30.730 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:30.730 ==> default: -> value=-device, 00:02:30.730 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:30.730 ==> default: -> value=-drive, 00:02:30.730 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:30.730 ==> default: -> value=-device, 00:02:30.730 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:30.730 ==> default: -> value=-drive, 00:02:30.730 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:30.730 ==> default: -> value=-device, 00:02:30.730 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:30.730 ==> default: -> value=-drive, 00:02:30.730 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:30.730 ==> default: -> value=-device, 00:02:30.730 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:30.730 ==> default: Creating shared folders metadata... 00:02:30.730 ==> default: Starting domain. 00:02:32.631 ==> default: Waiting for domain to get an IP address... 00:02:47.511 ==> default: Waiting for SSH to become available... 
00:02:48.899 ==> default: Configuring and enabling network interfaces... 00:02:53.088 default: SSH address: 192.168.121.244:22 00:02:53.088 default: SSH username: vagrant 00:02:53.088 default: SSH auth method: private key 00:02:55.622 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:03.739 ==> default: Mounting SSHFS shared folder... 00:03:05.115 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:03:05.115 ==> default: Checking Mount.. 00:03:06.052 ==> default: Folder Successfully Mounted! 00:03:06.052 ==> default: Running provisioner: file... 00:03:06.990 default: ~/.gitconfig => .gitconfig 00:03:07.557 00:03:07.557 SUCCESS! 00:03:07.557 00:03:07.557 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:03:07.557 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:07.557 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 
00:03:07.557 00:03:07.566 [Pipeline] } 00:03:07.581 [Pipeline] // stage 00:03:07.590 [Pipeline] dir 00:03:07.590 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt 00:03:07.592 [Pipeline] { 00:03:07.604 [Pipeline] catchError 00:03:07.606 [Pipeline] { 00:03:07.619 [Pipeline] sh 00:03:07.899 + vagrant ssh-config --host vagrant 00:03:07.899 + sed -ne /^Host/,$p 00:03:07.899 + tee ssh_conf 00:03:12.122 Host vagrant 00:03:12.122 HostName 192.168.121.244 00:03:12.122 User vagrant 00:03:12.122 Port 22 00:03:12.122 UserKnownHostsFile /dev/null 00:03:12.122 StrictHostKeyChecking no 00:03:12.122 PasswordAuthentication no 00:03:12.122 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:03:12.122 IdentitiesOnly yes 00:03:12.122 LogLevel FATAL 00:03:12.122 ForwardAgent yes 00:03:12.122 ForwardX11 yes 00:03:12.122 00:03:12.137 [Pipeline] withEnv 00:03:12.139 [Pipeline] { 00:03:12.156 [Pipeline] sh 00:03:12.436 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:12.436 source /etc/os-release 00:03:12.436 [[ -e /image.version ]] && img=$(< /image.version) 00:03:12.436 # Minimal, systemd-like check. 00:03:12.436 if [[ -e /.dockerenv ]]; then 00:03:12.436 # Clear garbage from the node's name: 00:03:12.436 # agt-er_autotest_547-896 -> autotest_547-896 00:03:12.436 # $HOSTNAME is the actual container id 00:03:12.436 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:12.436 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:12.436 # We can assume this is a mount from a host where container is running, 00:03:12.436 # so fetch its hostname to easily identify the target swarm worker. 
00:03:12.436 container="$(< /etc/hostname) ($agent)" 00:03:12.436 else 00:03:12.436 # Fallback 00:03:12.436 container=$agent 00:03:12.436 fi 00:03:12.436 fi 00:03:12.436 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:12.436 00:03:12.716 [Pipeline] } 00:03:12.733 [Pipeline] // withEnv 00:03:12.742 [Pipeline] setCustomBuildProperty 00:03:12.757 [Pipeline] stage 00:03:12.760 [Pipeline] { (Tests) 00:03:12.777 [Pipeline] sh 00:03:13.106 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:13.380 [Pipeline] sh 00:03:13.662 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:13.936 [Pipeline] timeout 00:03:13.936 Timeout set to expire in 1 hr 30 min 00:03:13.938 [Pipeline] { 00:03:13.954 [Pipeline] sh 00:03:14.235 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:14.803 HEAD is now at 6fc96a60f test/nvmf: Prepare replacements for the network setup 00:03:14.816 [Pipeline] sh 00:03:15.096 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:15.370 [Pipeline] sh 00:03:15.651 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:15.927 [Pipeline] sh 00:03:16.207 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:03:16.466 ++ readlink -f spdk_repo 00:03:16.466 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:16.466 + [[ -n /home/vagrant/spdk_repo ]] 00:03:16.466 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:16.466 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:16.466 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:16.466 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:16.466 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:16.466 + [[ raid-vg-autotest == pkgdep-* ]] 00:03:16.466 + cd /home/vagrant/spdk_repo 00:03:16.466 + source /etc/os-release 00:03:16.466 ++ NAME='Fedora Linux' 00:03:16.466 ++ VERSION='39 (Cloud Edition)' 00:03:16.466 ++ ID=fedora 00:03:16.466 ++ VERSION_ID=39 00:03:16.466 ++ VERSION_CODENAME= 00:03:16.466 ++ PLATFORM_ID=platform:f39 00:03:16.466 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:16.466 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:16.466 ++ LOGO=fedora-logo-icon 00:03:16.466 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:16.466 ++ HOME_URL=https://fedoraproject.org/ 00:03:16.466 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:16.466 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:16.466 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:16.466 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:16.466 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:16.466 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:16.466 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:16.466 ++ SUPPORT_END=2024-11-12 00:03:16.466 ++ VARIANT='Cloud Edition' 00:03:16.466 ++ VARIANT_ID=cloud 00:03:16.466 + uname -a 00:03:16.466 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:16.466 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:17.034 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:17.034 Hugepages 00:03:17.034 node hugesize free / total 00:03:17.034 node0 1048576kB 0 / 0 00:03:17.034 node0 2048kB 0 / 0 00:03:17.034 00:03:17.034 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:17.034 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:17.034 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:17.034 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:03:17.034 + rm -f /tmp/spdk-ld-path 00:03:17.034 + source autorun-spdk.conf 00:03:17.034 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:17.034 ++ SPDK_RUN_ASAN=1 00:03:17.034 ++ SPDK_RUN_UBSAN=1 00:03:17.034 ++ SPDK_TEST_RAID=1 00:03:17.034 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:17.034 ++ RUN_NIGHTLY=0 00:03:17.034 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:17.034 + [[ -n '' ]] 00:03:17.034 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:17.034 + for M in /var/spdk/build-*-manifest.txt 00:03:17.034 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:17.034 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:17.034 + for M in /var/spdk/build-*-manifest.txt 00:03:17.034 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:17.034 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:17.034 + for M in /var/spdk/build-*-manifest.txt 00:03:17.034 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:17.034 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:17.034 ++ uname 00:03:17.034 + [[ Linux == \L\i\n\u\x ]] 00:03:17.034 + sudo dmesg -T 00:03:17.034 + sudo dmesg --clear 00:03:17.034 + dmesg_pid=5203 00:03:17.034 + sudo dmesg -Tw 00:03:17.034 + [[ Fedora Linux == FreeBSD ]] 00:03:17.034 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:17.034 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:17.034 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:17.034 + [[ -x /usr/src/fio-static/fio ]] 00:03:17.034 + export FIO_BIN=/usr/src/fio-static/fio 00:03:17.034 + FIO_BIN=/usr/src/fio-static/fio 00:03:17.034 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:17.034 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:03:17.034 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:17.035 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:17.035 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:17.035 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:17.035 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:17.035 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:17.035 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:17.293 08:36:47 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:17.293 08:36:47 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:17.293 08:36:47 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:17.293 08:36:47 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:03:17.293 08:36:47 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:03:17.293 08:36:47 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:03:17.293 08:36:47 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:17.293 08:36:47 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:03:17.293 08:36:47 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:17.293 08:36:47 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:17.293 08:36:48 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:17.293 08:36:48 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:17.293 08:36:48 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:17.293 08:36:48 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:17.293 08:36:48 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:17.293 08:36:48 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:17.293 08:36:48 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:17.293 08:36:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:17.293 08:36:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:17.293 08:36:48 -- paths/export.sh@5 -- $ export PATH 00:03:17.293 08:36:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:17.293 08:36:48 -- 
common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:17.293 08:36:48 -- common/autobuild_common.sh@493 -- $ date +%s 00:03:17.293 08:36:48 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732091808.XXXXXX 00:03:17.293 08:36:48 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732091808.HFOPgP 00:03:17.293 08:36:48 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:03:17.293 08:36:48 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:03:17.293 08:36:48 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:17.293 08:36:48 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:17.293 08:36:48 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:17.293 08:36:48 -- common/autobuild_common.sh@509 -- $ get_config_params 00:03:17.293 08:36:48 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:17.293 08:36:48 -- common/autotest_common.sh@10 -- $ set +x 00:03:17.293 08:36:48 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:03:17.293 08:36:48 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:03:17.293 08:36:48 -- pm/common@17 -- $ local monitor 00:03:17.293 08:36:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:17.293 08:36:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:17.293 08:36:48 -- pm/common@25 -- $ sleep 1 00:03:17.293 08:36:48 -- pm/common@21 -- $ date +%s 00:03:17.293 08:36:48 -- pm/common@21 -- $ date +%s 00:03:17.293 
08:36:48 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732091808 00:03:17.293 08:36:48 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732091808 00:03:17.293 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732091808_collect-vmstat.pm.log 00:03:17.293 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732091808_collect-cpu-load.pm.log 00:03:18.232 08:36:49 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:03:18.232 08:36:49 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:18.232 08:36:49 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:18.232 08:36:49 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:18.232 08:36:49 -- spdk/autobuild.sh@16 -- $ date -u 00:03:18.232 Wed Nov 20 08:36:49 AM UTC 2024 00:03:18.232 08:36:49 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:18.232 v25.01-pre-200-g6fc96a60f 00:03:18.232 08:36:49 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:03:18.232 08:36:49 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:03:18.232 08:36:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:18.232 08:36:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:18.232 08:36:49 -- common/autotest_common.sh@10 -- $ set +x 00:03:18.232 ************************************ 00:03:18.232 START TEST asan 00:03:18.232 ************************************ 00:03:18.232 using asan 00:03:18.232 08:36:49 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:03:18.232 00:03:18.232 real 0m0.000s 00:03:18.232 user 0m0.000s 00:03:18.232 sys 0m0.000s 00:03:18.232 08:36:49 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:18.232 08:36:49 asan -- common/autotest_common.sh@10 -- $ set +x 
00:03:18.232 ************************************ 00:03:18.232 END TEST asan 00:03:18.232 ************************************ 00:03:18.232 08:36:49 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:18.232 08:36:49 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:18.232 08:36:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:18.232 08:36:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:18.232 08:36:49 -- common/autotest_common.sh@10 -- $ set +x 00:03:18.232 ************************************ 00:03:18.232 START TEST ubsan 00:03:18.232 ************************************ 00:03:18.232 using ubsan 00:03:18.232 08:36:49 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:18.232 00:03:18.232 real 0m0.000s 00:03:18.232 user 0m0.000s 00:03:18.232 sys 0m0.000s 00:03:18.232 08:36:49 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:18.232 ************************************ 00:03:18.232 END TEST ubsan 00:03:18.232 08:36:49 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:18.232 ************************************ 00:03:18.490 08:36:49 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:18.490 08:36:49 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:18.490 08:36:49 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:18.490 08:36:49 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:18.490 08:36:49 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:18.490 08:36:49 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:18.490 08:36:49 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:18.490 08:36:49 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:18.490 08:36:49 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:03:18.490 Using default SPDK env in 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:18.490 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:19.057 Using 'verbs' RDMA provider 00:03:34.878 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:47.088 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:47.347 Creating mk/config.mk...done. 00:03:47.347 Creating mk/cc.flags.mk...done. 00:03:47.348 Type 'make' to build. 00:03:47.348 08:37:18 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:47.348 08:37:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:47.348 08:37:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:47.348 08:37:18 -- common/autotest_common.sh@10 -- $ set +x 00:03:47.348 ************************************ 00:03:47.348 START TEST make 00:03:47.348 ************************************ 00:03:47.348 08:37:18 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:47.915 make[1]: Nothing to be done for 'all'. 
00:04:00.187 The Meson build system 00:04:00.187 Version: 1.5.0 00:04:00.187 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:00.187 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:00.187 Build type: native build 00:04:00.187 Program cat found: YES (/usr/bin/cat) 00:04:00.187 Project name: DPDK 00:04:00.187 Project version: 24.03.0 00:04:00.187 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:00.187 C linker for the host machine: cc ld.bfd 2.40-14 00:04:00.187 Host machine cpu family: x86_64 00:04:00.187 Host machine cpu: x86_64 00:04:00.187 Message: ## Building in Developer Mode ## 00:04:00.187 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:00.187 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:00.187 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:00.187 Program python3 found: YES (/usr/bin/python3) 00:04:00.187 Program cat found: YES (/usr/bin/cat) 00:04:00.187 Compiler for C supports arguments -march=native: YES 00:04:00.187 Checking for size of "void *" : 8 00:04:00.187 Checking for size of "void *" : 8 (cached) 00:04:00.187 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:00.187 Library m found: YES 00:04:00.187 Library numa found: YES 00:04:00.187 Has header "numaif.h" : YES 00:04:00.187 Library fdt found: NO 00:04:00.187 Library execinfo found: NO 00:04:00.187 Has header "execinfo.h" : YES 00:04:00.187 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:00.187 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:00.187 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:00.187 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:00.187 Run-time dependency openssl found: YES 3.1.1 00:04:00.187 Run-time dependency libpcap found: YES 1.10.4 00:04:00.187 Has header "pcap.h" with dependency 
libpcap: YES 00:04:00.187 Compiler for C supports arguments -Wcast-qual: YES 00:04:00.187 Compiler for C supports arguments -Wdeprecated: YES 00:04:00.187 Compiler for C supports arguments -Wformat: YES 00:04:00.187 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:00.187 Compiler for C supports arguments -Wformat-security: NO 00:04:00.187 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:00.187 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:00.187 Compiler for C supports arguments -Wnested-externs: YES 00:04:00.187 Compiler for C supports arguments -Wold-style-definition: YES 00:04:00.187 Compiler for C supports arguments -Wpointer-arith: YES 00:04:00.187 Compiler for C supports arguments -Wsign-compare: YES 00:04:00.187 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:00.187 Compiler for C supports arguments -Wundef: YES 00:04:00.187 Compiler for C supports arguments -Wwrite-strings: YES 00:04:00.187 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:00.187 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:00.187 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:00.187 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:00.187 Program objdump found: YES (/usr/bin/objdump) 00:04:00.187 Compiler for C supports arguments -mavx512f: YES 00:04:00.187 Checking if "AVX512 checking" compiles: YES 00:04:00.187 Fetching value of define "__SSE4_2__" : 1 00:04:00.187 Fetching value of define "__AES__" : 1 00:04:00.187 Fetching value of define "__AVX__" : 1 00:04:00.187 Fetching value of define "__AVX2__" : 1 00:04:00.187 Fetching value of define "__AVX512BW__" : (undefined) 00:04:00.187 Fetching value of define "__AVX512CD__" : (undefined) 00:04:00.187 Fetching value of define "__AVX512DQ__" : (undefined) 00:04:00.187 Fetching value of define "__AVX512F__" : (undefined) 00:04:00.187 Fetching value of define "__AVX512VL__" : 
(undefined) 00:04:00.187 Fetching value of define "__PCLMUL__" : 1 00:04:00.187 Fetching value of define "__RDRND__" : 1 00:04:00.187 Fetching value of define "__RDSEED__" : 1 00:04:00.187 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:00.187 Fetching value of define "__znver1__" : (undefined) 00:04:00.187 Fetching value of define "__znver2__" : (undefined) 00:04:00.187 Fetching value of define "__znver3__" : (undefined) 00:04:00.187 Fetching value of define "__znver4__" : (undefined) 00:04:00.187 Library asan found: YES 00:04:00.187 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:00.187 Message: lib/log: Defining dependency "log" 00:04:00.187 Message: lib/kvargs: Defining dependency "kvargs" 00:04:00.187 Message: lib/telemetry: Defining dependency "telemetry" 00:04:00.187 Library rt found: YES 00:04:00.187 Checking for function "getentropy" : NO 00:04:00.187 Message: lib/eal: Defining dependency "eal" 00:04:00.187 Message: lib/ring: Defining dependency "ring" 00:04:00.187 Message: lib/rcu: Defining dependency "rcu" 00:04:00.187 Message: lib/mempool: Defining dependency "mempool" 00:04:00.187 Message: lib/mbuf: Defining dependency "mbuf" 00:04:00.187 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:00.187 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:00.187 Compiler for C supports arguments -mpclmul: YES 00:04:00.187 Compiler for C supports arguments -maes: YES 00:04:00.187 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:00.187 Compiler for C supports arguments -mavx512bw: YES 00:04:00.187 Compiler for C supports arguments -mavx512dq: YES 00:04:00.187 Compiler for C supports arguments -mavx512vl: YES 00:04:00.187 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:00.187 Compiler for C supports arguments -mavx2: YES 00:04:00.187 Compiler for C supports arguments -mavx: YES 00:04:00.187 Message: lib/net: Defining dependency "net" 00:04:00.187 Message: lib/meter: Defining 
dependency "meter" 00:04:00.187 Message: lib/ethdev: Defining dependency "ethdev" 00:04:00.187 Message: lib/pci: Defining dependency "pci" 00:04:00.187 Message: lib/cmdline: Defining dependency "cmdline" 00:04:00.187 Message: lib/hash: Defining dependency "hash" 00:04:00.187 Message: lib/timer: Defining dependency "timer" 00:04:00.187 Message: lib/compressdev: Defining dependency "compressdev" 00:04:00.187 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:00.187 Message: lib/dmadev: Defining dependency "dmadev" 00:04:00.187 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:00.187 Message: lib/power: Defining dependency "power" 00:04:00.187 Message: lib/reorder: Defining dependency "reorder" 00:04:00.187 Message: lib/security: Defining dependency "security" 00:04:00.187 Has header "linux/userfaultfd.h" : YES 00:04:00.187 Has header "linux/vduse.h" : YES 00:04:00.187 Message: lib/vhost: Defining dependency "vhost" 00:04:00.187 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:00.187 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:00.187 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:00.187 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:00.187 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:00.187 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:00.187 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:00.187 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:00.187 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:00.187 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:00.187 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:00.187 Configuring doxy-api-html.conf using configuration 00:04:00.187 Configuring doxy-api-man.conf using configuration 00:04:00.187 Program mandb found: YES 
(/usr/bin/mandb) 00:04:00.187 Program sphinx-build found: NO 00:04:00.187 Configuring rte_build_config.h using configuration 00:04:00.187 Message: 00:04:00.187 ================= 00:04:00.187 Applications Enabled 00:04:00.187 ================= 00:04:00.187 00:04:00.187 apps: 00:04:00.187 00:04:00.187 00:04:00.187 Message: 00:04:00.187 ================= 00:04:00.187 Libraries Enabled 00:04:00.187 ================= 00:04:00.187 00:04:00.187 libs: 00:04:00.187 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:00.187 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:00.187 cryptodev, dmadev, power, reorder, security, vhost, 00:04:00.187 00:04:00.187 Message: 00:04:00.187 =============== 00:04:00.187 Drivers Enabled 00:04:00.187 =============== 00:04:00.187 00:04:00.187 common: 00:04:00.188 00:04:00.188 bus: 00:04:00.188 pci, vdev, 00:04:00.188 mempool: 00:04:00.188 ring, 00:04:00.188 dma: 00:04:00.188 00:04:00.188 net: 00:04:00.188 00:04:00.188 crypto: 00:04:00.188 00:04:00.188 compress: 00:04:00.188 00:04:00.188 vdpa: 00:04:00.188 00:04:00.188 00:04:00.188 Message: 00:04:00.188 ================= 00:04:00.188 Content Skipped 00:04:00.188 ================= 00:04:00.188 00:04:00.188 apps: 00:04:00.188 dumpcap: explicitly disabled via build config 00:04:00.188 graph: explicitly disabled via build config 00:04:00.188 pdump: explicitly disabled via build config 00:04:00.188 proc-info: explicitly disabled via build config 00:04:00.188 test-acl: explicitly disabled via build config 00:04:00.188 test-bbdev: explicitly disabled via build config 00:04:00.188 test-cmdline: explicitly disabled via build config 00:04:00.188 test-compress-perf: explicitly disabled via build config 00:04:00.188 test-crypto-perf: explicitly disabled via build config 00:04:00.188 test-dma-perf: explicitly disabled via build config 00:04:00.188 test-eventdev: explicitly disabled via build config 00:04:00.188 test-fib: explicitly disabled via build config 00:04:00.188 
test-flow-perf: explicitly disabled via build config 00:04:00.188 test-gpudev: explicitly disabled via build config 00:04:00.188 test-mldev: explicitly disabled via build config 00:04:00.188 test-pipeline: explicitly disabled via build config 00:04:00.188 test-pmd: explicitly disabled via build config 00:04:00.188 test-regex: explicitly disabled via build config 00:04:00.188 test-sad: explicitly disabled via build config 00:04:00.188 test-security-perf: explicitly disabled via build config 00:04:00.188 00:04:00.188 libs: 00:04:00.188 argparse: explicitly disabled via build config 00:04:00.188 metrics: explicitly disabled via build config 00:04:00.188 acl: explicitly disabled via build config 00:04:00.188 bbdev: explicitly disabled via build config 00:04:00.188 bitratestats: explicitly disabled via build config 00:04:00.188 bpf: explicitly disabled via build config 00:04:00.188 cfgfile: explicitly disabled via build config 00:04:00.188 distributor: explicitly disabled via build config 00:04:00.188 efd: explicitly disabled via build config 00:04:00.188 eventdev: explicitly disabled via build config 00:04:00.188 dispatcher: explicitly disabled via build config 00:04:00.188 gpudev: explicitly disabled via build config 00:04:00.188 gro: explicitly disabled via build config 00:04:00.188 gso: explicitly disabled via build config 00:04:00.188 ip_frag: explicitly disabled via build config 00:04:00.188 jobstats: explicitly disabled via build config 00:04:00.188 latencystats: explicitly disabled via build config 00:04:00.188 lpm: explicitly disabled via build config 00:04:00.188 member: explicitly disabled via build config 00:04:00.188 pcapng: explicitly disabled via build config 00:04:00.188 rawdev: explicitly disabled via build config 00:04:00.188 regexdev: explicitly disabled via build config 00:04:00.188 mldev: explicitly disabled via build config 00:04:00.188 rib: explicitly disabled via build config 00:04:00.188 sched: explicitly disabled via build config 00:04:00.188 
stack: explicitly disabled via build config 00:04:00.188 ipsec: explicitly disabled via build config 00:04:00.188 pdcp: explicitly disabled via build config 00:04:00.188 fib: explicitly disabled via build config 00:04:00.188 port: explicitly disabled via build config 00:04:00.188 pdump: explicitly disabled via build config 00:04:00.188 table: explicitly disabled via build config 00:04:00.188 pipeline: explicitly disabled via build config 00:04:00.188 graph: explicitly disabled via build config 00:04:00.188 node: explicitly disabled via build config 00:04:00.188 00:04:00.188 drivers: 00:04:00.188 common/cpt: not in enabled drivers build config 00:04:00.188 common/dpaax: not in enabled drivers build config 00:04:00.188 common/iavf: not in enabled drivers build config 00:04:00.188 common/idpf: not in enabled drivers build config 00:04:00.188 common/ionic: not in enabled drivers build config 00:04:00.188 common/mvep: not in enabled drivers build config 00:04:00.188 common/octeontx: not in enabled drivers build config 00:04:00.188 bus/auxiliary: not in enabled drivers build config 00:04:00.188 bus/cdx: not in enabled drivers build config 00:04:00.188 bus/dpaa: not in enabled drivers build config 00:04:00.188 bus/fslmc: not in enabled drivers build config 00:04:00.188 bus/ifpga: not in enabled drivers build config 00:04:00.188 bus/platform: not in enabled drivers build config 00:04:00.188 bus/uacce: not in enabled drivers build config 00:04:00.188 bus/vmbus: not in enabled drivers build config 00:04:00.188 common/cnxk: not in enabled drivers build config 00:04:00.188 common/mlx5: not in enabled drivers build config 00:04:00.188 common/nfp: not in enabled drivers build config 00:04:00.188 common/nitrox: not in enabled drivers build config 00:04:00.188 common/qat: not in enabled drivers build config 00:04:00.188 common/sfc_efx: not in enabled drivers build config 00:04:00.188 mempool/bucket: not in enabled drivers build config 00:04:00.188 mempool/cnxk: not in enabled 
drivers build config 00:04:00.188 mempool/dpaa: not in enabled drivers build config 00:04:00.188 mempool/dpaa2: not in enabled drivers build config 00:04:00.188 mempool/octeontx: not in enabled drivers build config 00:04:00.188 mempool/stack: not in enabled drivers build config 00:04:00.188 dma/cnxk: not in enabled drivers build config 00:04:00.188 dma/dpaa: not in enabled drivers build config 00:04:00.188 dma/dpaa2: not in enabled drivers build config 00:04:00.188 dma/hisilicon: not in enabled drivers build config 00:04:00.188 dma/idxd: not in enabled drivers build config 00:04:00.188 dma/ioat: not in enabled drivers build config 00:04:00.188 dma/skeleton: not in enabled drivers build config 00:04:00.188 net/af_packet: not in enabled drivers build config 00:04:00.188 net/af_xdp: not in enabled drivers build config 00:04:00.188 net/ark: not in enabled drivers build config 00:04:00.188 net/atlantic: not in enabled drivers build config 00:04:00.188 net/avp: not in enabled drivers build config 00:04:00.188 net/axgbe: not in enabled drivers build config 00:04:00.188 net/bnx2x: not in enabled drivers build config 00:04:00.188 net/bnxt: not in enabled drivers build config 00:04:00.188 net/bonding: not in enabled drivers build config 00:04:00.188 net/cnxk: not in enabled drivers build config 00:04:00.188 net/cpfl: not in enabled drivers build config 00:04:00.188 net/cxgbe: not in enabled drivers build config 00:04:00.188 net/dpaa: not in enabled drivers build config 00:04:00.188 net/dpaa2: not in enabled drivers build config 00:04:00.188 net/e1000: not in enabled drivers build config 00:04:00.188 net/ena: not in enabled drivers build config 00:04:00.188 net/enetc: not in enabled drivers build config 00:04:00.188 net/enetfec: not in enabled drivers build config 00:04:00.188 net/enic: not in enabled drivers build config 00:04:00.188 net/failsafe: not in enabled drivers build config 00:04:00.188 net/fm10k: not in enabled drivers build config 00:04:00.188 net/gve: not in 
enabled drivers build config 00:04:00.188 net/hinic: not in enabled drivers build config 00:04:00.188 net/hns3: not in enabled drivers build config 00:04:00.188 net/i40e: not in enabled drivers build config 00:04:00.188 net/iavf: not in enabled drivers build config 00:04:00.188 net/ice: not in enabled drivers build config 00:04:00.188 net/idpf: not in enabled drivers build config 00:04:00.188 net/igc: not in enabled drivers build config 00:04:00.188 net/ionic: not in enabled drivers build config 00:04:00.188 net/ipn3ke: not in enabled drivers build config 00:04:00.188 net/ixgbe: not in enabled drivers build config 00:04:00.188 net/mana: not in enabled drivers build config 00:04:00.188 net/memif: not in enabled drivers build config 00:04:00.188 net/mlx4: not in enabled drivers build config 00:04:00.188 net/mlx5: not in enabled drivers build config 00:04:00.188 net/mvneta: not in enabled drivers build config 00:04:00.188 net/mvpp2: not in enabled drivers build config 00:04:00.188 net/netvsc: not in enabled drivers build config 00:04:00.188 net/nfb: not in enabled drivers build config 00:04:00.188 net/nfp: not in enabled drivers build config 00:04:00.188 net/ngbe: not in enabled drivers build config 00:04:00.188 net/null: not in enabled drivers build config 00:04:00.188 net/octeontx: not in enabled drivers build config 00:04:00.188 net/octeon_ep: not in enabled drivers build config 00:04:00.188 net/pcap: not in enabled drivers build config 00:04:00.188 net/pfe: not in enabled drivers build config 00:04:00.188 net/qede: not in enabled drivers build config 00:04:00.188 net/ring: not in enabled drivers build config 00:04:00.188 net/sfc: not in enabled drivers build config 00:04:00.188 net/softnic: not in enabled drivers build config 00:04:00.188 net/tap: not in enabled drivers build config 00:04:00.188 net/thunderx: not in enabled drivers build config 00:04:00.188 net/txgbe: not in enabled drivers build config 00:04:00.188 net/vdev_netvsc: not in enabled drivers build 
config 00:04:00.188 net/vhost: not in enabled drivers build config 00:04:00.188 net/virtio: not in enabled drivers build config 00:04:00.188 net/vmxnet3: not in enabled drivers build config 00:04:00.188 raw/*: missing internal dependency, "rawdev" 00:04:00.188 crypto/armv8: not in enabled drivers build config 00:04:00.188 crypto/bcmfs: not in enabled drivers build config 00:04:00.188 crypto/caam_jr: not in enabled drivers build config 00:04:00.188 crypto/ccp: not in enabled drivers build config 00:04:00.188 crypto/cnxk: not in enabled drivers build config 00:04:00.188 crypto/dpaa_sec: not in enabled drivers build config 00:04:00.188 crypto/dpaa2_sec: not in enabled drivers build config 00:04:00.188 crypto/ipsec_mb: not in enabled drivers build config 00:04:00.188 crypto/mlx5: not in enabled drivers build config 00:04:00.188 crypto/mvsam: not in enabled drivers build config 00:04:00.188 crypto/nitrox: not in enabled drivers build config 00:04:00.188 crypto/null: not in enabled drivers build config 00:04:00.188 crypto/octeontx: not in enabled drivers build config 00:04:00.188 crypto/openssl: not in enabled drivers build config 00:04:00.188 crypto/scheduler: not in enabled drivers build config 00:04:00.188 crypto/uadk: not in enabled drivers build config 00:04:00.188 crypto/virtio: not in enabled drivers build config 00:04:00.188 compress/isal: not in enabled drivers build config 00:04:00.188 compress/mlx5: not in enabled drivers build config 00:04:00.189 compress/nitrox: not in enabled drivers build config 00:04:00.189 compress/octeontx: not in enabled drivers build config 00:04:00.189 compress/zlib: not in enabled drivers build config 00:04:00.189 regex/*: missing internal dependency, "regexdev" 00:04:00.189 ml/*: missing internal dependency, "mldev" 00:04:00.189 vdpa/ifc: not in enabled drivers build config 00:04:00.189 vdpa/mlx5: not in enabled drivers build config 00:04:00.189 vdpa/nfp: not in enabled drivers build config 00:04:00.189 vdpa/sfc: not in enabled 
drivers build config 00:04:00.189 event/*: missing internal dependency, "eventdev" 00:04:00.189 baseband/*: missing internal dependency, "bbdev" 00:04:00.189 gpu/*: missing internal dependency, "gpudev" 00:04:00.189 00:04:00.189 00:04:00.189 Build targets in project: 85 00:04:00.189 00:04:00.189 DPDK 24.03.0 00:04:00.189 00:04:00.189 User defined options 00:04:00.189 buildtype : debug 00:04:00.189 default_library : shared 00:04:00.189 libdir : lib 00:04:00.189 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:00.189 b_sanitize : address 00:04:00.189 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:00.189 c_link_args : 00:04:00.189 cpu_instruction_set: native 00:04:00.189 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:00.189 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:00.189 enable_docs : false 00:04:00.189 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:04:00.189 enable_kmods : false 00:04:00.189 max_lcores : 128 00:04:00.189 tests : false 00:04:00.189 00:04:00.189 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:00.448 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:00.707 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:00.707 [2/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:00.707 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:00.707 [4/268] 
Linking static target lib/librte_kvargs.a 00:04:00.707 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:00.707 [6/268] Linking static target lib/librte_log.a 00:04:01.275 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.275 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:01.536 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:01.536 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:01.536 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:01.536 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:01.536 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:01.536 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:01.536 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:01.536 [16/268] Linking static target lib/librte_telemetry.a 00:04:01.536 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:01.797 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.797 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:01.797 [20/268] Linking target lib/librte_log.so.24.1 00:04:02.056 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:02.315 [22/268] Linking target lib/librte_kvargs.so.24.1 00:04:02.315 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:02.315 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:02.315 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:02.315 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:02.574 
[27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:02.574 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:02.574 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:02.574 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:02.574 [31/268] Linking target lib/librte_telemetry.so.24.1 00:04:02.574 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:02.574 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:02.833 [34/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:02.833 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:02.833 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:03.092 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:03.351 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:03.351 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:03.351 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:03.351 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:03.351 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:03.610 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:03.610 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:03.610 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:03.610 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:03.610 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:03.870 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:03.870 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:03.870 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:04.438 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:04.438 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:04.438 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:04.438 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:04.438 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:04.696 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:04.696 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:04.696 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:04.696 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:04.696 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:04.956 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:05.215 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:05.215 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:05.215 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:05.215 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:05.474 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:05.733 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:05.733 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:05.733 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:05.733 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:05.733 
[71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:05.992 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:05.992 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:05.992 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:05.992 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:06.251 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:06.251 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:06.251 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:06.251 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:06.510 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:06.510 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:06.510 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:06.769 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:06.769 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:06.769 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:07.028 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:07.028 [87/268] Linking static target lib/librte_eal.a 00:04:07.028 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:07.028 [89/268] Linking static target lib/librte_ring.a 00:04:07.028 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:07.286 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:07.286 [92/268] Linking static target lib/librte_mempool.a 00:04:07.286 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:07.286 [94/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 
00:04:07.286 [95/268] Linking static target lib/librte_rcu.a 00:04:07.546 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:07.546 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:07.546 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:07.546 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:07.546 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.121 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.121 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:08.121 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:08.121 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:08.121 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:08.121 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:08.121 [107/268] Linking static target lib/librte_mbuf.a 00:04:08.121 [108/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:08.121 [109/268] Linking static target lib/librte_net.a 00:04:08.380 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.638 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:08.638 [112/268] Linking static target lib/librte_meter.a 00:04:08.638 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.638 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:08.946 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:08.946 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:08.946 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.946 [118/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:09.205 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.205 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:09.464 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:09.723 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:09.982 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:09.982 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:10.241 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:10.241 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:10.241 [127/268] Linking static target lib/librte_pci.a 00:04:10.241 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:10.241 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:10.241 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:10.500 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:10.500 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:10.500 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:10.500 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:10.500 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.500 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:10.500 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:10.500 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:10.500 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:10.500 [140/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:10.759 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:10.759 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:10.759 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:10.759 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:10.759 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:11.018 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:11.018 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:11.018 [148/268] Linking static target lib/librte_cmdline.a 00:04:11.278 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:11.278 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:11.537 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:11.537 [152/268] Linking static target lib/librte_timer.a 00:04:11.799 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:11.799 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:12.058 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:12.058 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:12.058 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:12.058 [158/268] Linking static target lib/librte_compressdev.a 00:04:12.058 [159/268] Linking static target lib/librte_ethdev.a 00:04:12.058 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:12.058 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:12.058 [162/268] Linking static target lib/librte_hash.a 00:04:12.316 [163/268] Generating lib/timer.sym_chk with a custom 
command (wrapped by meson to capture output) 00:04:12.316 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:12.574 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:12.832 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:12.832 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:12.832 [168/268] Linking static target lib/librte_dmadev.a 00:04:12.832 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:12.832 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.091 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:13.091 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:13.091 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.091 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:13.349 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.608 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:13.608 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.608 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:13.867 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:13.867 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:13.867 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:13.867 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:13.867 [183/268] Linking static target lib/librte_cryptodev.a 00:04:13.867 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 
00:04:14.435 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:14.694 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:14.694 [187/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:14.694 [188/268] Linking static target lib/librte_power.a 00:04:14.694 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:14.694 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:14.694 [191/268] Linking static target lib/librte_security.a 00:04:14.953 [192/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:14.953 [193/268] Linking static target lib/librte_reorder.a 00:04:15.212 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:15.470 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.470 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.037 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:16.037 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.037 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:16.037 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:16.605 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:16.605 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:16.605 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.605 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:16.605 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:16.605 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:16.863 [207/268] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:17.122 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:17.122 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:17.122 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:17.122 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:17.381 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:17.381 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:17.640 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:17.640 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:17.640 [216/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:17.640 [217/268] Linking static target drivers/librte_bus_vdev.a 00:04:17.640 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:17.640 [219/268] Linking static target drivers/librte_bus_pci.a 00:04:17.640 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:17.640 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:17.898 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:17.898 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:17.898 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:17.898 [225/268] Linking static target drivers/librte_mempool_ring.a 00:04:17.898 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.157 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:04:18.728 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:19.295 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.295 [230/268] Linking target lib/librte_eal.so.24.1 00:04:19.553 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:19.553 [232/268] Linking target lib/librte_pci.so.24.1 00:04:19.553 [233/268] Linking target lib/librte_ring.so.24.1 00:04:19.553 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:19.553 [235/268] Linking target lib/librte_timer.so.24.1 00:04:19.553 [236/268] Linking target lib/librte_meter.so.24.1 00:04:19.553 [237/268] Linking target lib/librte_dmadev.so.24.1 00:04:19.553 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:19.811 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:19.811 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:19.811 [241/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:19.811 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:19.811 [243/268] Linking target lib/librte_rcu.so.24.1 00:04:19.811 [244/268] Linking target lib/librte_mempool.so.24.1 00:04:19.811 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:19.811 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:19.811 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:20.070 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:20.070 [249/268] Linking target lib/librte_mbuf.so.24.1 00:04:20.070 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:20.070 [251/268] Linking target lib/librte_compressdev.so.24.1 00:04:20.070 [252/268] Linking 
target lib/librte_net.so.24.1 00:04:20.070 [253/268] Linking target lib/librte_reorder.so.24.1 00:04:20.328 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:04:20.328 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:20.328 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:20.328 [257/268] Linking target lib/librte_hash.so.24.1 00:04:20.328 [258/268] Linking target lib/librte_cmdline.so.24.1 00:04:20.328 [259/268] Linking target lib/librte_security.so.24.1 00:04:20.619 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.619 [261/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:20.619 [262/268] Linking target lib/librte_ethdev.so.24.1 00:04:20.619 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:20.885 [264/268] Linking target lib/librte_power.so.24.1 00:04:22.788 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:22.788 [266/268] Linking static target lib/librte_vhost.a 00:04:24.693 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.693 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:24.693 INFO: autodetecting backend as ninja 00:04:24.693 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:46.631 CC lib/log/log.o 00:04:46.631 CC lib/ut/ut.o 00:04:46.631 CC lib/log/log_deprecated.o 00:04:46.631 CC lib/ut_mock/mock.o 00:04:46.631 CC lib/log/log_flags.o 00:04:46.631 LIB libspdk_log.a 00:04:46.631 LIB libspdk_ut_mock.a 00:04:46.631 LIB libspdk_ut.a 00:04:46.631 SO libspdk_ut_mock.so.6.0 00:04:46.631 SO libspdk_ut.so.2.0 00:04:46.631 SO libspdk_log.so.7.1 00:04:46.631 SYMLINK libspdk_ut_mock.so 00:04:46.631 SYMLINK libspdk_ut.so 00:04:46.631 SYMLINK libspdk_log.so 
00:04:46.631 CC lib/ioat/ioat.o 00:04:46.631 CC lib/dma/dma.o 00:04:46.631 CC lib/util/base64.o 00:04:46.631 CC lib/util/cpuset.o 00:04:46.631 CC lib/util/bit_array.o 00:04:46.631 CC lib/util/crc16.o 00:04:46.631 CC lib/util/crc32.o 00:04:46.631 CC lib/util/crc32c.o 00:04:46.631 CXX lib/trace_parser/trace.o 00:04:46.631 CC lib/vfio_user/host/vfio_user_pci.o 00:04:46.631 CC lib/util/crc32_ieee.o 00:04:46.631 CC lib/vfio_user/host/vfio_user.o 00:04:46.631 CC lib/util/crc64.o 00:04:46.631 CC lib/util/dif.o 00:04:46.631 LIB libspdk_dma.a 00:04:46.631 CC lib/util/fd.o 00:04:46.631 SO libspdk_dma.so.5.0 00:04:46.631 CC lib/util/fd_group.o 00:04:46.631 LIB libspdk_ioat.a 00:04:46.631 CC lib/util/file.o 00:04:46.631 CC lib/util/hexlify.o 00:04:46.631 SO libspdk_ioat.so.7.0 00:04:46.631 SYMLINK libspdk_dma.so 00:04:46.631 CC lib/util/iov.o 00:04:46.631 SYMLINK libspdk_ioat.so 00:04:46.631 CC lib/util/math.o 00:04:46.631 CC lib/util/net.o 00:04:46.631 CC lib/util/pipe.o 00:04:46.631 LIB libspdk_vfio_user.a 00:04:46.631 CC lib/util/strerror_tls.o 00:04:46.631 SO libspdk_vfio_user.so.5.0 00:04:46.631 CC lib/util/string.o 00:04:46.631 SYMLINK libspdk_vfio_user.so 00:04:46.631 CC lib/util/uuid.o 00:04:46.631 CC lib/util/xor.o 00:04:46.631 CC lib/util/zipf.o 00:04:46.631 CC lib/util/md5.o 00:04:46.631 LIB libspdk_util.a 00:04:46.631 SO libspdk_util.so.10.1 00:04:46.631 SYMLINK libspdk_util.so 00:04:46.631 LIB libspdk_trace_parser.a 00:04:46.631 SO libspdk_trace_parser.so.6.0 00:04:46.631 SYMLINK libspdk_trace_parser.so 00:04:46.631 CC lib/idxd/idxd.o 00:04:46.631 CC lib/idxd/idxd_user.o 00:04:46.631 CC lib/idxd/idxd_kernel.o 00:04:46.631 CC lib/conf/conf.o 00:04:46.631 CC lib/rdma_utils/rdma_utils.o 00:04:46.631 CC lib/vmd/vmd.o 00:04:46.631 CC lib/vmd/led.o 00:04:46.631 CC lib/json/json_parse.o 00:04:46.631 CC lib/json/json_util.o 00:04:46.631 CC lib/env_dpdk/env.o 00:04:46.631 CC lib/env_dpdk/memory.o 00:04:46.631 CC lib/env_dpdk/pci.o 00:04:46.631 LIB libspdk_conf.a 
00:04:46.631 CC lib/env_dpdk/init.o 00:04:46.890 SO libspdk_conf.so.6.0 00:04:46.890 CC lib/env_dpdk/threads.o 00:04:46.890 CC lib/json/json_write.o 00:04:46.890 LIB libspdk_rdma_utils.a 00:04:46.890 SYMLINK libspdk_conf.so 00:04:46.890 CC lib/env_dpdk/pci_ioat.o 00:04:46.890 SO libspdk_rdma_utils.so.1.0 00:04:46.890 SYMLINK libspdk_rdma_utils.so 00:04:46.890 CC lib/env_dpdk/pci_virtio.o 00:04:46.890 CC lib/env_dpdk/pci_vmd.o 00:04:47.150 CC lib/env_dpdk/pci_idxd.o 00:04:47.150 CC lib/env_dpdk/pci_event.o 00:04:47.150 LIB libspdk_json.a 00:04:47.150 CC lib/env_dpdk/sigbus_handler.o 00:04:47.150 CC lib/rdma_provider/common.o 00:04:47.150 SO libspdk_json.so.6.0 00:04:47.150 CC lib/env_dpdk/pci_dpdk.o 00:04:47.150 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:47.150 SYMLINK libspdk_json.so 00:04:47.150 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:47.409 LIB libspdk_idxd.a 00:04:47.409 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:47.409 SO libspdk_idxd.so.12.1 00:04:47.409 LIB libspdk_vmd.a 00:04:47.409 SO libspdk_vmd.so.6.0 00:04:47.409 SYMLINK libspdk_idxd.so 00:04:47.409 CC lib/jsonrpc/jsonrpc_server.o 00:04:47.409 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:47.409 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:47.409 CC lib/jsonrpc/jsonrpc_client.o 00:04:47.409 SYMLINK libspdk_vmd.so 00:04:47.668 LIB libspdk_rdma_provider.a 00:04:47.668 SO libspdk_rdma_provider.so.7.0 00:04:47.668 SYMLINK libspdk_rdma_provider.so 00:04:47.668 LIB libspdk_jsonrpc.a 00:04:47.927 SO libspdk_jsonrpc.so.6.0 00:04:47.927 SYMLINK libspdk_jsonrpc.so 00:04:48.186 CC lib/rpc/rpc.o 00:04:48.444 LIB libspdk_env_dpdk.a 00:04:48.444 LIB libspdk_rpc.a 00:04:48.444 SO libspdk_rpc.so.6.0 00:04:48.444 SO libspdk_env_dpdk.so.15.1 00:04:48.703 SYMLINK libspdk_rpc.so 00:04:48.703 SYMLINK libspdk_env_dpdk.so 00:04:48.703 CC lib/keyring/keyring.o 00:04:48.703 CC lib/keyring/keyring_rpc.o 00:04:48.703 CC lib/notify/notify_rpc.o 00:04:48.703 CC lib/notify/notify.o 00:04:48.703 CC lib/trace/trace.o 00:04:48.703 CC 
lib/trace/trace_flags.o 00:04:48.703 CC lib/trace/trace_rpc.o 00:04:48.963 LIB libspdk_notify.a 00:04:48.963 SO libspdk_notify.so.6.0 00:04:49.222 SYMLINK libspdk_notify.so 00:04:49.222 LIB libspdk_keyring.a 00:04:49.222 LIB libspdk_trace.a 00:04:49.222 SO libspdk_keyring.so.2.0 00:04:49.222 SO libspdk_trace.so.11.0 00:04:49.222 SYMLINK libspdk_keyring.so 00:04:49.222 SYMLINK libspdk_trace.so 00:04:49.482 CC lib/sock/sock.o 00:04:49.482 CC lib/sock/sock_rpc.o 00:04:49.482 CC lib/thread/thread.o 00:04:49.482 CC lib/thread/iobuf.o 00:04:50.420 LIB libspdk_sock.a 00:04:50.420 SO libspdk_sock.so.10.0 00:04:50.420 SYMLINK libspdk_sock.so 00:04:50.679 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:50.679 CC lib/nvme/nvme_ctrlr.o 00:04:50.679 CC lib/nvme/nvme_fabric.o 00:04:50.679 CC lib/nvme/nvme_ns_cmd.o 00:04:50.679 CC lib/nvme/nvme_ns.o 00:04:50.679 CC lib/nvme/nvme_pcie.o 00:04:50.679 CC lib/nvme/nvme_pcie_common.o 00:04:50.679 CC lib/nvme/nvme_qpair.o 00:04:50.679 CC lib/nvme/nvme.o 00:04:51.641 CC lib/nvme/nvme_quirks.o 00:04:51.641 CC lib/nvme/nvme_transport.o 00:04:51.641 CC lib/nvme/nvme_discovery.o 00:04:51.641 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:51.641 LIB libspdk_thread.a 00:04:51.641 SO libspdk_thread.so.11.0 00:04:51.641 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:51.900 CC lib/nvme/nvme_tcp.o 00:04:51.900 SYMLINK libspdk_thread.so 00:04:51.900 CC lib/accel/accel.o 00:04:51.900 CC lib/blob/blobstore.o 00:04:52.159 CC lib/nvme/nvme_opal.o 00:04:52.159 CC lib/init/json_config.o 00:04:52.159 CC lib/init/subsystem.o 00:04:52.159 CC lib/init/subsystem_rpc.o 00:04:52.418 CC lib/accel/accel_rpc.o 00:04:52.418 CC lib/accel/accel_sw.o 00:04:52.418 CC lib/init/rpc.o 00:04:52.418 CC lib/nvme/nvme_io_msg.o 00:04:52.418 CC lib/blob/request.o 00:04:52.678 LIB libspdk_init.a 00:04:52.678 SO libspdk_init.so.6.0 00:04:52.678 CC lib/nvme/nvme_poll_group.o 00:04:52.678 SYMLINK libspdk_init.so 00:04:52.937 CC lib/virtio/virtio.o 00:04:52.937 CC lib/fsdev/fsdev.o 00:04:52.937 CC 
lib/blob/zeroes.o 00:04:52.937 CC lib/event/app.o 00:04:53.196 CC lib/fsdev/fsdev_io.o 00:04:53.196 CC lib/fsdev/fsdev_rpc.o 00:04:53.196 CC lib/virtio/virtio_vhost_user.o 00:04:53.455 CC lib/blob/blob_bs_dev.o 00:04:53.455 LIB libspdk_accel.a 00:04:53.455 SO libspdk_accel.so.16.0 00:04:53.455 SYMLINK libspdk_accel.so 00:04:53.455 CC lib/virtio/virtio_vfio_user.o 00:04:53.714 CC lib/virtio/virtio_pci.o 00:04:53.714 CC lib/nvme/nvme_zns.o 00:04:53.714 CC lib/event/reactor.o 00:04:53.714 CC lib/event/log_rpc.o 00:04:53.714 CC lib/nvme/nvme_stubs.o 00:04:53.714 CC lib/bdev/bdev.o 00:04:53.714 CC lib/bdev/bdev_rpc.o 00:04:53.714 LIB libspdk_fsdev.a 00:04:53.714 SO libspdk_fsdev.so.2.0 00:04:53.714 CC lib/event/app_rpc.o 00:04:53.974 CC lib/nvme/nvme_auth.o 00:04:53.974 SYMLINK libspdk_fsdev.so 00:04:53.974 CC lib/bdev/bdev_zone.o 00:04:53.974 LIB libspdk_virtio.a 00:04:53.974 SO libspdk_virtio.so.7.0 00:04:53.974 SYMLINK libspdk_virtio.so 00:04:54.233 CC lib/bdev/part.o 00:04:54.233 CC lib/bdev/scsi_nvme.o 00:04:54.233 CC lib/event/scheduler_static.o 00:04:54.233 CC lib/nvme/nvme_cuse.o 00:04:54.233 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:54.233 CC lib/nvme/nvme_rdma.o 00:04:54.233 LIB libspdk_event.a 00:04:54.492 SO libspdk_event.so.14.0 00:04:54.492 SYMLINK libspdk_event.so 00:04:55.060 LIB libspdk_fuse_dispatcher.a 00:04:55.060 SO libspdk_fuse_dispatcher.so.1.0 00:04:55.060 SYMLINK libspdk_fuse_dispatcher.so 00:04:55.997 LIB libspdk_nvme.a 00:04:56.256 SO libspdk_nvme.so.15.0 00:04:56.256 LIB libspdk_blob.a 00:04:56.515 SO libspdk_blob.so.11.0 00:04:56.515 SYMLINK libspdk_nvme.so 00:04:56.515 SYMLINK libspdk_blob.so 00:04:56.774 CC lib/lvol/lvol.o 00:04:56.774 CC lib/blobfs/tree.o 00:04:56.774 CC lib/blobfs/blobfs.o 00:04:57.342 LIB libspdk_bdev.a 00:04:57.601 SO libspdk_bdev.so.17.0 00:04:57.601 SYMLINK libspdk_bdev.so 00:04:57.860 CC lib/nvmf/ctrlr.o 00:04:57.860 CC lib/scsi/dev.o 00:04:57.860 CC lib/nvmf/ctrlr_discovery.o 00:04:57.860 CC 
lib/nvmf/ctrlr_bdev.o 00:04:57.860 CC lib/nbd/nbd.o 00:04:57.860 CC lib/ftl/ftl_core.o 00:04:57.860 CC lib/nbd/nbd_rpc.o 00:04:57.860 CC lib/ublk/ublk.o 00:04:58.119 LIB libspdk_blobfs.a 00:04:58.119 SO libspdk_blobfs.so.10.0 00:04:58.119 CC lib/nvmf/subsystem.o 00:04:58.119 CC lib/scsi/lun.o 00:04:58.119 LIB libspdk_lvol.a 00:04:58.119 SYMLINK libspdk_blobfs.so 00:04:58.119 CC lib/scsi/port.o 00:04:58.119 SO libspdk_lvol.so.10.0 00:04:58.119 SYMLINK libspdk_lvol.so 00:04:58.119 CC lib/scsi/scsi.o 00:04:58.378 CC lib/nvmf/nvmf.o 00:04:58.378 CC lib/ftl/ftl_init.o 00:04:58.378 LIB libspdk_nbd.a 00:04:58.378 CC lib/scsi/scsi_bdev.o 00:04:58.378 SO libspdk_nbd.so.7.0 00:04:58.378 CC lib/ftl/ftl_layout.o 00:04:58.378 CC lib/ftl/ftl_debug.o 00:04:58.378 SYMLINK libspdk_nbd.so 00:04:58.378 CC lib/scsi/scsi_pr.o 00:04:58.645 CC lib/scsi/scsi_rpc.o 00:04:58.645 CC lib/ublk/ublk_rpc.o 00:04:58.645 CC lib/nvmf/nvmf_rpc.o 00:04:58.645 CC lib/nvmf/transport.o 00:04:58.924 CC lib/scsi/task.o 00:04:58.924 CC lib/ftl/ftl_io.o 00:04:58.924 CC lib/ftl/ftl_sb.o 00:04:58.924 LIB libspdk_ublk.a 00:04:58.924 SO libspdk_ublk.so.3.0 00:04:58.924 CC lib/ftl/ftl_l2p.o 00:04:58.924 SYMLINK libspdk_ublk.so 00:04:58.924 LIB libspdk_scsi.a 00:04:58.924 CC lib/ftl/ftl_l2p_flat.o 00:04:59.183 SO libspdk_scsi.so.9.0 00:04:59.183 CC lib/ftl/ftl_nv_cache.o 00:04:59.183 CC lib/ftl/ftl_band.o 00:04:59.183 SYMLINK libspdk_scsi.so 00:04:59.183 CC lib/ftl/ftl_band_ops.o 00:04:59.183 CC lib/nvmf/tcp.o 00:04:59.183 CC lib/ftl/ftl_writer.o 00:04:59.441 CC lib/nvmf/stubs.o 00:04:59.700 CC lib/ftl/ftl_rq.o 00:04:59.700 CC lib/ftl/ftl_reloc.o 00:04:59.700 CC lib/ftl/ftl_l2p_cache.o 00:04:59.700 CC lib/nvmf/mdns_server.o 00:04:59.700 CC lib/ftl/ftl_p2l.o 00:04:59.960 CC lib/ftl/ftl_p2l_log.o 00:04:59.960 CC lib/iscsi/conn.o 00:04:59.960 CC lib/iscsi/init_grp.o 00:04:59.960 CC lib/nvmf/rdma.o 00:05:00.218 CC lib/ftl/mngt/ftl_mngt.o 00:05:00.218 CC lib/nvmf/auth.o 00:05:00.218 CC lib/ftl/mngt/ftl_mngt_bdev.o 
00:05:00.218 CC lib/vhost/vhost.o 00:05:00.478 CC lib/vhost/vhost_rpc.o 00:05:00.478 CC lib/vhost/vhost_scsi.o 00:05:00.478 CC lib/vhost/vhost_blk.o 00:05:00.478 CC lib/vhost/rte_vhost_user.o 00:05:00.738 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:00.738 CC lib/iscsi/iscsi.o 00:05:00.995 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:00.996 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:00.996 CC lib/iscsi/param.o 00:05:01.254 CC lib/iscsi/portal_grp.o 00:05:01.254 CC lib/iscsi/tgt_node.o 00:05:01.512 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:01.512 CC lib/iscsi/iscsi_subsystem.o 00:05:01.512 CC lib/iscsi/iscsi_rpc.o 00:05:01.512 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:01.512 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:01.770 CC lib/iscsi/task.o 00:05:01.770 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:01.770 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:01.770 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:01.770 LIB libspdk_vhost.a 00:05:01.770 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:01.770 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:01.770 SO libspdk_vhost.so.8.0 00:05:02.029 CC lib/ftl/utils/ftl_conf.o 00:05:02.029 CC lib/ftl/utils/ftl_md.o 00:05:02.029 SYMLINK libspdk_vhost.so 00:05:02.029 CC lib/ftl/utils/ftl_mempool.o 00:05:02.029 CC lib/ftl/utils/ftl_bitmap.o 00:05:02.029 CC lib/ftl/utils/ftl_property.o 00:05:02.029 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:02.029 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:02.287 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:02.287 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:02.287 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:02.287 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:02.287 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:02.287 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:02.545 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:02.545 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:02.545 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:02.545 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:02.545 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:02.545 CC lib/ftl/base/ftl_base_dev.o 00:05:02.545 CC 
lib/ftl/base/ftl_base_bdev.o 00:05:02.545 CC lib/ftl/ftl_trace.o 00:05:02.803 LIB libspdk_iscsi.a 00:05:02.803 SO libspdk_iscsi.so.8.0 00:05:02.803 LIB libspdk_ftl.a 00:05:03.060 SYMLINK libspdk_iscsi.so 00:05:03.060 LIB libspdk_nvmf.a 00:05:03.318 SO libspdk_nvmf.so.20.0 00:05:03.318 SO libspdk_ftl.so.9.0 00:05:03.576 SYMLINK libspdk_nvmf.so 00:05:03.576 SYMLINK libspdk_ftl.so 00:05:03.834 CC module/env_dpdk/env_dpdk_rpc.o 00:05:04.093 CC module/accel/error/accel_error.o 00:05:04.093 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:04.093 CC module/keyring/file/keyring.o 00:05:04.093 CC module/keyring/linux/keyring.o 00:05:04.093 CC module/blob/bdev/blob_bdev.o 00:05:04.093 CC module/accel/ioat/accel_ioat.o 00:05:04.093 CC module/fsdev/aio/fsdev_aio.o 00:05:04.093 CC module/accel/dsa/accel_dsa.o 00:05:04.093 CC module/sock/posix/posix.o 00:05:04.093 LIB libspdk_env_dpdk_rpc.a 00:05:04.093 SO libspdk_env_dpdk_rpc.so.6.0 00:05:04.093 CC module/keyring/linux/keyring_rpc.o 00:05:04.093 CC module/keyring/file/keyring_rpc.o 00:05:04.093 SYMLINK libspdk_env_dpdk_rpc.so 00:05:04.093 CC module/accel/error/accel_error_rpc.o 00:05:04.351 CC module/accel/ioat/accel_ioat_rpc.o 00:05:04.351 LIB libspdk_keyring_linux.a 00:05:04.351 LIB libspdk_scheduler_dynamic.a 00:05:04.351 LIB libspdk_keyring_file.a 00:05:04.351 SO libspdk_keyring_linux.so.1.0 00:05:04.351 SO libspdk_scheduler_dynamic.so.4.0 00:05:04.351 SO libspdk_keyring_file.so.2.0 00:05:04.351 CC module/accel/dsa/accel_dsa_rpc.o 00:05:04.351 LIB libspdk_blob_bdev.a 00:05:04.351 LIB libspdk_accel_error.a 00:05:04.351 LIB libspdk_accel_ioat.a 00:05:04.351 SO libspdk_blob_bdev.so.11.0 00:05:04.351 SO libspdk_accel_error.so.2.0 00:05:04.351 SYMLINK libspdk_keyring_linux.so 00:05:04.351 SYMLINK libspdk_scheduler_dynamic.so 00:05:04.351 SO libspdk_accel_ioat.so.6.0 00:05:04.351 SYMLINK libspdk_keyring_file.so 00:05:04.351 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:04.351 CC module/fsdev/aio/fsdev_aio_rpc.o 
00:05:04.351 SYMLINK libspdk_blob_bdev.so 00:05:04.351 SYMLINK libspdk_accel_error.so 00:05:04.351 CC module/fsdev/aio/linux_aio_mgr.o 00:05:04.609 SYMLINK libspdk_accel_ioat.so 00:05:04.609 LIB libspdk_accel_dsa.a 00:05:04.609 SO libspdk_accel_dsa.so.5.0 00:05:04.609 LIB libspdk_scheduler_dpdk_governor.a 00:05:04.609 SYMLINK libspdk_accel_dsa.so 00:05:04.609 CC module/accel/iaa/accel_iaa.o 00:05:04.609 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:04.609 CC module/accel/iaa/accel_iaa_rpc.o 00:05:04.609 CC module/scheduler/gscheduler/gscheduler.o 00:05:04.867 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:04.867 CC module/bdev/delay/vbdev_delay.o 00:05:04.867 CC module/blobfs/bdev/blobfs_bdev.o 00:05:04.867 CC module/bdev/error/vbdev_error.o 00:05:04.867 CC module/bdev/gpt/gpt.o 00:05:04.867 CC module/bdev/error/vbdev_error_rpc.o 00:05:04.867 LIB libspdk_fsdev_aio.a 00:05:04.867 LIB libspdk_scheduler_gscheduler.a 00:05:04.867 LIB libspdk_accel_iaa.a 00:05:04.867 CC module/bdev/lvol/vbdev_lvol.o 00:05:04.867 SO libspdk_scheduler_gscheduler.so.4.0 00:05:04.867 SO libspdk_fsdev_aio.so.1.0 00:05:04.868 SO libspdk_accel_iaa.so.3.0 00:05:04.868 LIB libspdk_sock_posix.a 00:05:05.124 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:05.124 SYMLINK libspdk_scheduler_gscheduler.so 00:05:05.124 SO libspdk_sock_posix.so.6.0 00:05:05.124 SYMLINK libspdk_accel_iaa.so 00:05:05.124 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:05.124 SYMLINK libspdk_fsdev_aio.so 00:05:05.124 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:05.124 CC module/bdev/gpt/vbdev_gpt.o 00:05:05.124 SYMLINK libspdk_sock_posix.so 00:05:05.124 LIB libspdk_bdev_error.a 00:05:05.124 LIB libspdk_blobfs_bdev.a 00:05:05.124 CC module/bdev/malloc/bdev_malloc.o 00:05:05.124 SO libspdk_bdev_error.so.6.0 00:05:05.124 CC module/bdev/null/bdev_null.o 00:05:05.124 CC module/bdev/null/bdev_null_rpc.o 00:05:05.124 SO libspdk_blobfs_bdev.so.6.0 00:05:05.124 LIB libspdk_bdev_delay.a 00:05:05.380 SYMLINK libspdk_bdev_error.so 
00:05:05.380 SO libspdk_bdev_delay.so.6.0 00:05:05.380 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:05.380 SYMLINK libspdk_blobfs_bdev.so 00:05:05.380 CC module/bdev/nvme/bdev_nvme.o 00:05:05.380 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:05.380 SYMLINK libspdk_bdev_delay.so 00:05:05.380 LIB libspdk_bdev_gpt.a 00:05:05.380 SO libspdk_bdev_gpt.so.6.0 00:05:05.638 SYMLINK libspdk_bdev_gpt.so 00:05:05.638 CC module/bdev/passthru/vbdev_passthru.o 00:05:05.638 CC module/bdev/nvme/nvme_rpc.o 00:05:05.638 LIB libspdk_bdev_null.a 00:05:05.638 LIB libspdk_bdev_lvol.a 00:05:05.638 SO libspdk_bdev_null.so.6.0 00:05:05.638 CC module/bdev/raid/bdev_raid.o 00:05:05.638 SO libspdk_bdev_lvol.so.6.0 00:05:05.638 SYMLINK libspdk_bdev_null.so 00:05:05.638 CC module/bdev/raid/bdev_raid_rpc.o 00:05:05.638 CC module/bdev/split/vbdev_split.o 00:05:05.638 LIB libspdk_bdev_malloc.a 00:05:05.638 SYMLINK libspdk_bdev_lvol.so 00:05:05.638 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:05.638 SO libspdk_bdev_malloc.so.6.0 00:05:05.896 SYMLINK libspdk_bdev_malloc.so 00:05:05.896 CC module/bdev/nvme/bdev_mdns_client.o 00:05:05.896 CC module/bdev/nvme/vbdev_opal.o 00:05:05.896 CC module/bdev/aio/bdev_aio.o 00:05:05.896 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:05.896 CC module/bdev/raid/bdev_raid_sb.o 00:05:05.896 CC module/bdev/split/vbdev_split_rpc.o 00:05:05.896 CC module/bdev/raid/raid0.o 00:05:06.153 LIB libspdk_bdev_passthru.a 00:05:06.153 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:06.153 LIB libspdk_bdev_split.a 00:05:06.153 SO libspdk_bdev_passthru.so.6.0 00:05:06.153 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:06.153 SO libspdk_bdev_split.so.6.0 00:05:06.153 SYMLINK libspdk_bdev_passthru.so 00:05:06.153 CC module/bdev/raid/raid1.o 00:05:06.153 SYMLINK libspdk_bdev_split.so 00:05:06.153 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:06.153 LIB libspdk_bdev_zone_block.a 00:05:06.153 CC module/bdev/ftl/bdev_ftl.o 00:05:06.453 CC module/bdev/aio/bdev_aio_rpc.o 
00:05:06.453 SO libspdk_bdev_zone_block.so.6.0 00:05:06.453 SYMLINK libspdk_bdev_zone_block.so 00:05:06.453 CC module/bdev/raid/concat.o 00:05:06.453 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:06.453 CC module/bdev/raid/raid5f.o 00:05:06.453 CC module/bdev/iscsi/bdev_iscsi.o 00:05:06.453 LIB libspdk_bdev_aio.a 00:05:06.453 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:06.453 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:06.453 SO libspdk_bdev_aio.so.6.0 00:05:06.711 SYMLINK libspdk_bdev_aio.so 00:05:06.711 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:06.711 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:06.711 LIB libspdk_bdev_ftl.a 00:05:06.711 SO libspdk_bdev_ftl.so.6.0 00:05:06.711 SYMLINK libspdk_bdev_ftl.so 00:05:06.970 LIB libspdk_bdev_iscsi.a 00:05:06.970 SO libspdk_bdev_iscsi.so.6.0 00:05:06.970 SYMLINK libspdk_bdev_iscsi.so 00:05:06.970 LIB libspdk_bdev_raid.a 00:05:07.227 SO libspdk_bdev_raid.so.6.0 00:05:07.227 LIB libspdk_bdev_virtio.a 00:05:07.227 SO libspdk_bdev_virtio.so.6.0 00:05:07.227 SYMLINK libspdk_bdev_raid.so 00:05:07.227 SYMLINK libspdk_bdev_virtio.so 00:05:09.123 LIB libspdk_bdev_nvme.a 00:05:09.123 SO libspdk_bdev_nvme.so.7.1 00:05:09.123 SYMLINK libspdk_bdev_nvme.so 00:05:09.687 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:09.687 CC module/event/subsystems/iobuf/iobuf.o 00:05:09.687 CC module/event/subsystems/vmd/vmd.o 00:05:09.687 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:09.687 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:09.687 CC module/event/subsystems/scheduler/scheduler.o 00:05:09.687 CC module/event/subsystems/fsdev/fsdev.o 00:05:09.687 CC module/event/subsystems/sock/sock.o 00:05:09.687 CC module/event/subsystems/keyring/keyring.o 00:05:09.687 LIB libspdk_event_scheduler.a 00:05:09.687 LIB libspdk_event_vhost_blk.a 00:05:09.687 LIB libspdk_event_fsdev.a 00:05:09.687 LIB libspdk_event_vmd.a 00:05:09.687 LIB libspdk_event_keyring.a 00:05:09.945 SO libspdk_event_scheduler.so.4.0 00:05:09.945 LIB libspdk_event_sock.a 
00:05:09.945 LIB libspdk_event_iobuf.a 00:05:09.945 SO libspdk_event_vhost_blk.so.3.0 00:05:09.945 SO libspdk_event_fsdev.so.1.0 00:05:09.945 SO libspdk_event_keyring.so.1.0 00:05:09.945 SO libspdk_event_vmd.so.6.0 00:05:09.945 SO libspdk_event_sock.so.5.0 00:05:09.945 SO libspdk_event_iobuf.so.3.0 00:05:09.945 SYMLINK libspdk_event_scheduler.so 00:05:09.945 SYMLINK libspdk_event_vhost_blk.so 00:05:09.945 SYMLINK libspdk_event_keyring.so 00:05:09.945 SYMLINK libspdk_event_vmd.so 00:05:09.945 SYMLINK libspdk_event_fsdev.so 00:05:09.945 SYMLINK libspdk_event_sock.so 00:05:09.945 SYMLINK libspdk_event_iobuf.so 00:05:10.202 CC module/event/subsystems/accel/accel.o 00:05:10.460 LIB libspdk_event_accel.a 00:05:10.460 SO libspdk_event_accel.so.6.0 00:05:10.460 SYMLINK libspdk_event_accel.so 00:05:10.719 CC module/event/subsystems/bdev/bdev.o 00:05:10.978 LIB libspdk_event_bdev.a 00:05:10.978 SO libspdk_event_bdev.so.6.0 00:05:10.978 SYMLINK libspdk_event_bdev.so 00:05:11.236 CC module/event/subsystems/ublk/ublk.o 00:05:11.236 CC module/event/subsystems/nbd/nbd.o 00:05:11.236 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:11.236 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:11.236 CC module/event/subsystems/scsi/scsi.o 00:05:11.494 LIB libspdk_event_ublk.a 00:05:11.494 LIB libspdk_event_nbd.a 00:05:11.494 SO libspdk_event_ublk.so.3.0 00:05:11.494 LIB libspdk_event_scsi.a 00:05:11.494 SO libspdk_event_nbd.so.6.0 00:05:11.494 SO libspdk_event_scsi.so.6.0 00:05:11.494 SYMLINK libspdk_event_ublk.so 00:05:11.494 SYMLINK libspdk_event_nbd.so 00:05:11.494 LIB libspdk_event_nvmf.a 00:05:11.494 SYMLINK libspdk_event_scsi.so 00:05:11.752 SO libspdk_event_nvmf.so.6.0 00:05:11.752 SYMLINK libspdk_event_nvmf.so 00:05:11.752 CC module/event/subsystems/iscsi/iscsi.o 00:05:11.752 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:12.012 LIB libspdk_event_vhost_scsi.a 00:05:12.012 SO libspdk_event_vhost_scsi.so.3.0 00:05:12.012 LIB libspdk_event_iscsi.a 00:05:12.012 SO 
libspdk_event_iscsi.so.6.0 00:05:12.376 SYMLINK libspdk_event_vhost_scsi.so 00:05:12.376 SYMLINK libspdk_event_iscsi.so 00:05:12.376 SO libspdk.so.6.0 00:05:12.376 SYMLINK libspdk.so 00:05:12.635 CXX app/trace/trace.o 00:05:12.635 CC test/rpc_client/rpc_client_test.o 00:05:12.635 TEST_HEADER include/spdk/accel.h 00:05:12.635 TEST_HEADER include/spdk/accel_module.h 00:05:12.635 TEST_HEADER include/spdk/assert.h 00:05:12.635 TEST_HEADER include/spdk/barrier.h 00:05:12.635 TEST_HEADER include/spdk/base64.h 00:05:12.635 TEST_HEADER include/spdk/bdev.h 00:05:12.635 TEST_HEADER include/spdk/bdev_module.h 00:05:12.635 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:12.635 TEST_HEADER include/spdk/bdev_zone.h 00:05:12.635 TEST_HEADER include/spdk/bit_array.h 00:05:12.635 TEST_HEADER include/spdk/bit_pool.h 00:05:12.635 TEST_HEADER include/spdk/blob_bdev.h 00:05:12.635 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:12.635 TEST_HEADER include/spdk/blobfs.h 00:05:12.635 TEST_HEADER include/spdk/blob.h 00:05:12.635 TEST_HEADER include/spdk/conf.h 00:05:12.635 TEST_HEADER include/spdk/config.h 00:05:12.635 TEST_HEADER include/spdk/cpuset.h 00:05:12.635 TEST_HEADER include/spdk/crc16.h 00:05:12.635 TEST_HEADER include/spdk/crc32.h 00:05:12.636 TEST_HEADER include/spdk/crc64.h 00:05:12.636 TEST_HEADER include/spdk/dif.h 00:05:12.636 TEST_HEADER include/spdk/dma.h 00:05:12.636 TEST_HEADER include/spdk/endian.h 00:05:12.636 TEST_HEADER include/spdk/env_dpdk.h 00:05:12.636 TEST_HEADER include/spdk/env.h 00:05:12.636 TEST_HEADER include/spdk/event.h 00:05:12.636 TEST_HEADER include/spdk/fd_group.h 00:05:12.636 TEST_HEADER include/spdk/fd.h 00:05:12.636 TEST_HEADER include/spdk/file.h 00:05:12.636 TEST_HEADER include/spdk/fsdev.h 00:05:12.636 TEST_HEADER include/spdk/fsdev_module.h 00:05:12.636 TEST_HEADER include/spdk/ftl.h 00:05:12.636 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:12.636 CC test/thread/poller_perf/poller_perf.o 00:05:12.636 CC examples/ioat/perf/perf.o 00:05:12.636 
CC examples/util/zipf/zipf.o 00:05:12.636 TEST_HEADER include/spdk/gpt_spec.h 00:05:12.636 TEST_HEADER include/spdk/hexlify.h 00:05:12.636 TEST_HEADER include/spdk/histogram_data.h 00:05:12.636 TEST_HEADER include/spdk/idxd.h 00:05:12.894 TEST_HEADER include/spdk/idxd_spec.h 00:05:12.894 TEST_HEADER include/spdk/init.h 00:05:12.894 TEST_HEADER include/spdk/ioat.h 00:05:12.894 TEST_HEADER include/spdk/ioat_spec.h 00:05:12.894 TEST_HEADER include/spdk/iscsi_spec.h 00:05:12.894 TEST_HEADER include/spdk/json.h 00:05:12.894 CC test/dma/test_dma/test_dma.o 00:05:12.894 CC test/app/bdev_svc/bdev_svc.o 00:05:12.894 TEST_HEADER include/spdk/jsonrpc.h 00:05:12.894 TEST_HEADER include/spdk/keyring.h 00:05:12.894 TEST_HEADER include/spdk/keyring_module.h 00:05:12.894 TEST_HEADER include/spdk/likely.h 00:05:12.894 TEST_HEADER include/spdk/log.h 00:05:12.894 TEST_HEADER include/spdk/lvol.h 00:05:12.894 TEST_HEADER include/spdk/md5.h 00:05:12.894 TEST_HEADER include/spdk/memory.h 00:05:12.894 TEST_HEADER include/spdk/mmio.h 00:05:12.894 TEST_HEADER include/spdk/nbd.h 00:05:12.894 TEST_HEADER include/spdk/net.h 00:05:12.894 TEST_HEADER include/spdk/notify.h 00:05:12.894 TEST_HEADER include/spdk/nvme.h 00:05:12.894 TEST_HEADER include/spdk/nvme_intel.h 00:05:12.894 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:12.894 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:12.894 TEST_HEADER include/spdk/nvme_spec.h 00:05:12.894 TEST_HEADER include/spdk/nvme_zns.h 00:05:12.894 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:12.894 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:12.894 TEST_HEADER include/spdk/nvmf.h 00:05:12.894 TEST_HEADER include/spdk/nvmf_spec.h 00:05:12.894 TEST_HEADER include/spdk/nvmf_transport.h 00:05:12.894 TEST_HEADER include/spdk/opal.h 00:05:12.894 CC test/env/mem_callbacks/mem_callbacks.o 00:05:12.894 TEST_HEADER include/spdk/opal_spec.h 00:05:12.894 TEST_HEADER include/spdk/pci_ids.h 00:05:12.894 TEST_HEADER include/spdk/pipe.h 00:05:12.894 TEST_HEADER 
include/spdk/queue.h 00:05:12.894 TEST_HEADER include/spdk/reduce.h 00:05:12.894 TEST_HEADER include/spdk/rpc.h 00:05:12.894 TEST_HEADER include/spdk/scheduler.h 00:05:12.894 TEST_HEADER include/spdk/scsi.h 00:05:12.894 LINK rpc_client_test 00:05:12.894 TEST_HEADER include/spdk/scsi_spec.h 00:05:12.894 TEST_HEADER include/spdk/sock.h 00:05:12.894 TEST_HEADER include/spdk/stdinc.h 00:05:12.894 TEST_HEADER include/spdk/string.h 00:05:12.894 TEST_HEADER include/spdk/thread.h 00:05:12.894 TEST_HEADER include/spdk/trace.h 00:05:12.894 TEST_HEADER include/spdk/trace_parser.h 00:05:12.894 TEST_HEADER include/spdk/tree.h 00:05:12.894 TEST_HEADER include/spdk/ublk.h 00:05:12.894 TEST_HEADER include/spdk/util.h 00:05:12.894 LINK poller_perf 00:05:12.894 TEST_HEADER include/spdk/uuid.h 00:05:12.894 TEST_HEADER include/spdk/version.h 00:05:12.894 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:12.894 LINK interrupt_tgt 00:05:12.894 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:12.894 TEST_HEADER include/spdk/vhost.h 00:05:12.894 TEST_HEADER include/spdk/vmd.h 00:05:12.894 TEST_HEADER include/spdk/xor.h 00:05:12.894 TEST_HEADER include/spdk/zipf.h 00:05:12.894 CXX test/cpp_headers/accel.o 00:05:12.894 LINK zipf 00:05:13.152 LINK bdev_svc 00:05:13.152 LINK ioat_perf 00:05:13.152 CC examples/ioat/verify/verify.o 00:05:13.152 CXX test/cpp_headers/accel_module.o 00:05:13.152 LINK spdk_trace 00:05:13.152 CC app/trace_record/trace_record.o 00:05:13.153 CC app/nvmf_tgt/nvmf_main.o 00:05:13.153 CC app/iscsi_tgt/iscsi_tgt.o 00:05:13.412 CC app/spdk_tgt/spdk_tgt.o 00:05:13.412 CXX test/cpp_headers/assert.o 00:05:13.412 LINK verify 00:05:13.412 LINK test_dma 00:05:13.412 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:13.412 CC test/app/histogram_perf/histogram_perf.o 00:05:13.412 LINK nvmf_tgt 00:05:13.412 LINK spdk_trace_record 00:05:13.412 LINK mem_callbacks 00:05:13.412 LINK iscsi_tgt 00:05:13.671 CXX test/cpp_headers/barrier.o 00:05:13.671 LINK histogram_perf 00:05:13.671 LINK spdk_tgt 
00:05:13.671 CC test/env/vtophys/vtophys.o 00:05:13.671 CXX test/cpp_headers/base64.o 00:05:13.671 CC examples/thread/thread/thread_ex.o 00:05:13.671 CC examples/sock/hello_world/hello_sock.o 00:05:13.930 CC examples/vmd/lsvmd/lsvmd.o 00:05:13.930 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:13.930 CC test/app/jsoncat/jsoncat.o 00:05:13.930 CC examples/idxd/perf/perf.o 00:05:13.930 CC app/spdk_lspci/spdk_lspci.o 00:05:13.930 LINK vtophys 00:05:13.930 LINK nvme_fuzz 00:05:13.930 CXX test/cpp_headers/bdev.o 00:05:13.930 LINK lsvmd 00:05:13.930 LINK jsoncat 00:05:13.930 LINK thread 00:05:14.188 LINK spdk_lspci 00:05:14.188 CXX test/cpp_headers/bdev_module.o 00:05:14.188 LINK hello_sock 00:05:14.188 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:14.189 CXX test/cpp_headers/bdev_zone.o 00:05:14.189 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:14.189 CC examples/vmd/led/led.o 00:05:14.189 LINK idxd_perf 00:05:14.189 CC app/spdk_nvme_perf/perf.o 00:05:14.447 CC app/spdk_nvme_identify/identify.o 00:05:14.447 LINK env_dpdk_post_init 00:05:14.447 CXX test/cpp_headers/bit_array.o 00:05:14.447 CC app/spdk_nvme_discover/discovery_aer.o 00:05:14.447 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:14.447 CC app/spdk_top/spdk_top.o 00:05:14.447 LINK led 00:05:14.706 CXX test/cpp_headers/bit_pool.o 00:05:14.706 CC app/vhost/vhost.o 00:05:14.706 LINK spdk_nvme_discover 00:05:14.706 CC test/env/memory/memory_ut.o 00:05:14.706 CC examples/accel/perf/accel_perf.o 00:05:14.706 CXX test/cpp_headers/blob_bdev.o 00:05:14.706 LINK vhost 00:05:14.965 LINK vhost_fuzz 00:05:14.965 CC app/spdk_dd/spdk_dd.o 00:05:14.965 CXX test/cpp_headers/blobfs_bdev.o 00:05:14.965 CXX test/cpp_headers/blobfs.o 00:05:14.965 CXX test/cpp_headers/blob.o 00:05:15.224 CXX test/cpp_headers/conf.o 00:05:15.224 CC test/app/stub/stub.o 00:05:15.483 LINK spdk_nvme_perf 00:05:15.483 LINK spdk_dd 00:05:15.483 LINK spdk_nvme_identify 00:05:15.483 LINK accel_perf 00:05:15.483 CC app/fio/nvme/fio_plugin.o 
00:05:15.483 CXX test/cpp_headers/config.o 00:05:15.483 CXX test/cpp_headers/cpuset.o 00:05:15.483 LINK stub 00:05:15.483 CXX test/cpp_headers/crc16.o 00:05:15.483 LINK spdk_top 00:05:15.483 CXX test/cpp_headers/crc32.o 00:05:15.741 CXX test/cpp_headers/crc64.o 00:05:15.741 CXX test/cpp_headers/dif.o 00:05:15.741 CC test/event/event_perf/event_perf.o 00:05:15.741 CC examples/blob/hello_world/hello_blob.o 00:05:15.741 CC examples/nvme/hello_world/hello_world.o 00:05:15.741 CC examples/nvme/reconnect/reconnect.o 00:05:15.998 CXX test/cpp_headers/dma.o 00:05:15.998 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:15.998 LINK event_perf 00:05:15.998 LINK iscsi_fuzz 00:05:15.998 LINK memory_ut 00:05:15.998 CC examples/bdev/hello_world/hello_bdev.o 00:05:15.998 CXX test/cpp_headers/endian.o 00:05:15.998 LINK hello_blob 00:05:16.256 LINK hello_world 00:05:16.256 LINK spdk_nvme 00:05:16.256 CC test/event/reactor/reactor.o 00:05:16.256 LINK hello_fsdev 00:05:16.256 CXX test/cpp_headers/env_dpdk.o 00:05:16.256 LINK reconnect 00:05:16.256 CXX test/cpp_headers/env.o 00:05:16.256 CXX test/cpp_headers/event.o 00:05:16.256 CC test/env/pci/pci_ut.o 00:05:16.256 LINK reactor 00:05:16.256 LINK hello_bdev 00:05:16.256 CC app/fio/bdev/fio_plugin.o 00:05:16.513 CC examples/blob/cli/blobcli.o 00:05:16.513 CXX test/cpp_headers/fd_group.o 00:05:16.513 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:16.513 CC test/event/reactor_perf/reactor_perf.o 00:05:16.513 CC test/event/app_repeat/app_repeat.o 00:05:16.773 CC test/nvme/aer/aer.o 00:05:16.773 CC test/event/scheduler/scheduler.o 00:05:16.773 CXX test/cpp_headers/fd.o 00:05:16.773 CC examples/bdev/bdevperf/bdevperf.o 00:05:16.773 LINK reactor_perf 00:05:16.773 LINK app_repeat 00:05:16.773 LINK pci_ut 00:05:16.773 CXX test/cpp_headers/file.o 00:05:16.773 CXX test/cpp_headers/fsdev.o 00:05:17.032 LINK scheduler 00:05:17.032 LINK spdk_bdev 00:05:17.032 LINK aer 00:05:17.032 CC test/nvme/reset/reset.o 00:05:17.032 LINK blobcli 00:05:17.032 
CXX test/cpp_headers/fsdev_module.o 00:05:17.291 CXX test/cpp_headers/ftl.o 00:05:17.291 CC test/nvme/sgl/sgl.o 00:05:17.291 LINK nvme_manage 00:05:17.291 CC examples/nvme/hotplug/hotplug.o 00:05:17.291 CC examples/nvme/arbitration/arbitration.o 00:05:17.291 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:17.291 LINK reset 00:05:17.291 CC examples/nvme/abort/abort.o 00:05:17.551 CXX test/cpp_headers/fuse_dispatcher.o 00:05:17.551 LINK cmb_copy 00:05:17.551 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:17.551 LINK hotplug 00:05:17.551 LINK sgl 00:05:17.551 CXX test/cpp_headers/gpt_spec.o 00:05:17.810 CC test/accel/dif/dif.o 00:05:17.810 CXX test/cpp_headers/hexlify.o 00:05:17.810 LINK arbitration 00:05:17.810 LINK pmr_persistence 00:05:17.810 LINK bdevperf 00:05:17.810 CC test/nvme/e2edp/nvme_dp.o 00:05:17.810 CC test/blobfs/mkfs/mkfs.o 00:05:17.810 LINK abort 00:05:17.810 CXX test/cpp_headers/histogram_data.o 00:05:17.810 CC test/lvol/esnap/esnap.o 00:05:17.810 CXX test/cpp_headers/idxd.o 00:05:18.070 CXX test/cpp_headers/idxd_spec.o 00:05:18.070 CC test/nvme/overhead/overhead.o 00:05:18.070 CXX test/cpp_headers/init.o 00:05:18.070 LINK mkfs 00:05:18.070 CXX test/cpp_headers/ioat.o 00:05:18.070 CXX test/cpp_headers/ioat_spec.o 00:05:18.070 LINK nvme_dp 00:05:18.070 CXX test/cpp_headers/iscsi_spec.o 00:05:18.070 CC test/nvme/err_injection/err_injection.o 00:05:18.337 CC examples/nvmf/nvmf/nvmf.o 00:05:18.337 CXX test/cpp_headers/json.o 00:05:18.337 LINK overhead 00:05:18.337 CXX test/cpp_headers/jsonrpc.o 00:05:18.337 LINK err_injection 00:05:18.337 CC test/nvme/startup/startup.o 00:05:18.337 CC test/nvme/reserve/reserve.o 00:05:18.337 CC test/nvme/simple_copy/simple_copy.o 00:05:18.337 CXX test/cpp_headers/keyring.o 00:05:18.605 CC test/nvme/connect_stress/connect_stress.o 00:05:18.605 LINK dif 00:05:18.605 LINK nvmf 00:05:18.605 LINK startup 00:05:18.605 CXX test/cpp_headers/keyring_module.o 00:05:18.605 CC test/nvme/boot_partition/boot_partition.o 
00:05:18.605 CC test/nvme/compliance/nvme_compliance.o 00:05:18.605 LINK reserve 00:05:18.605 LINK simple_copy 00:05:18.864 LINK connect_stress 00:05:18.864 CXX test/cpp_headers/likely.o 00:05:18.864 CXX test/cpp_headers/log.o 00:05:18.864 LINK boot_partition 00:05:18.864 CC test/nvme/fused_ordering/fused_ordering.o 00:05:18.864 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:18.864 CC test/nvme/fdp/fdp.o 00:05:19.123 CXX test/cpp_headers/lvol.o 00:05:19.123 CXX test/cpp_headers/md5.o 00:05:19.123 CXX test/cpp_headers/memory.o 00:05:19.123 CC test/nvme/cuse/cuse.o 00:05:19.123 LINK nvme_compliance 00:05:19.123 CC test/bdev/bdevio/bdevio.o 00:05:19.123 LINK fused_ordering 00:05:19.123 LINK doorbell_aers 00:05:19.123 CXX test/cpp_headers/mmio.o 00:05:19.123 CXX test/cpp_headers/nbd.o 00:05:19.123 CXX test/cpp_headers/net.o 00:05:19.383 CXX test/cpp_headers/notify.o 00:05:19.383 CXX test/cpp_headers/nvme.o 00:05:19.383 CXX test/cpp_headers/nvme_intel.o 00:05:19.383 CXX test/cpp_headers/nvme_ocssd.o 00:05:19.383 LINK fdp 00:05:19.383 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:19.383 CXX test/cpp_headers/nvme_spec.o 00:05:19.383 CXX test/cpp_headers/nvme_zns.o 00:05:19.383 CXX test/cpp_headers/nvmf_cmd.o 00:05:19.383 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:19.642 CXX test/cpp_headers/nvmf.o 00:05:19.642 CXX test/cpp_headers/nvmf_spec.o 00:05:19.642 CXX test/cpp_headers/nvmf_transport.o 00:05:19.642 LINK bdevio 00:05:19.642 CXX test/cpp_headers/opal.o 00:05:19.642 CXX test/cpp_headers/opal_spec.o 00:05:19.642 CXX test/cpp_headers/pci_ids.o 00:05:19.642 CXX test/cpp_headers/pipe.o 00:05:19.642 CXX test/cpp_headers/queue.o 00:05:19.642 CXX test/cpp_headers/reduce.o 00:05:19.642 CXX test/cpp_headers/rpc.o 00:05:19.901 CXX test/cpp_headers/scheduler.o 00:05:19.901 CXX test/cpp_headers/scsi.o 00:05:19.901 CXX test/cpp_headers/scsi_spec.o 00:05:19.901 CXX test/cpp_headers/sock.o 00:05:19.901 CXX test/cpp_headers/stdinc.o 00:05:19.901 CXX test/cpp_headers/string.o 
00:05:19.901 CXX test/cpp_headers/thread.o 00:05:19.901 CXX test/cpp_headers/trace.o 00:05:19.901 CXX test/cpp_headers/trace_parser.o 00:05:19.901 CXX test/cpp_headers/tree.o 00:05:19.901 CXX test/cpp_headers/ublk.o 00:05:20.160 CXX test/cpp_headers/util.o 00:05:20.160 CXX test/cpp_headers/uuid.o 00:05:20.160 CXX test/cpp_headers/version.o 00:05:20.160 CXX test/cpp_headers/vfio_user_pci.o 00:05:20.160 CXX test/cpp_headers/vfio_user_spec.o 00:05:20.160 CXX test/cpp_headers/vhost.o 00:05:20.160 CXX test/cpp_headers/vmd.o 00:05:20.160 CXX test/cpp_headers/xor.o 00:05:20.160 CXX test/cpp_headers/zipf.o 00:05:20.728 LINK cuse 00:05:26.002 LINK esnap 00:05:26.002 00:05:26.002 real 1m38.138s 00:05:26.002 user 8m58.734s 00:05:26.002 sys 1m43.756s 00:05:26.002 08:38:56 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:26.002 08:38:56 make -- common/autotest_common.sh@10 -- $ set +x 00:05:26.002 ************************************ 00:05:26.002 END TEST make 00:05:26.002 ************************************ 00:05:26.002 08:38:56 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:26.002 08:38:56 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:26.002 08:38:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:26.002 08:38:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:26.002 08:38:56 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:26.002 08:38:56 -- pm/common@44 -- $ pid=5245 00:05:26.002 08:38:56 -- pm/common@50 -- $ kill -TERM 5245 00:05:26.002 08:38:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:26.002 08:38:56 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:26.002 08:38:56 -- pm/common@44 -- $ pid=5246 00:05:26.002 08:38:56 -- pm/common@50 -- $ kill -TERM 5246 00:05:26.002 08:38:56 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 
00:05:26.002 08:38:56 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:26.002 08:38:56 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.002 08:38:56 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.002 08:38:56 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.002 08:38:56 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.002 08:38:56 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.002 08:38:56 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.002 08:38:56 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.002 08:38:56 -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.002 08:38:56 -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.002 08:38:56 -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.002 08:38:56 -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.002 08:38:56 -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.002 08:38:56 -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.002 08:38:56 -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.002 08:38:56 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.002 08:38:56 -- scripts/common.sh@344 -- # case "$op" in 00:05:26.002 08:38:56 -- scripts/common.sh@345 -- # : 1 00:05:26.002 08:38:56 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.002 08:38:56 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.002 08:38:56 -- scripts/common.sh@365 -- # decimal 1 00:05:26.002 08:38:56 -- scripts/common.sh@353 -- # local d=1 00:05:26.002 08:38:56 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.002 08:38:56 -- scripts/common.sh@355 -- # echo 1 00:05:26.002 08:38:56 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.002 08:38:56 -- scripts/common.sh@366 -- # decimal 2 00:05:26.002 08:38:56 -- scripts/common.sh@353 -- # local d=2 00:05:26.002 08:38:56 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.002 08:38:56 -- scripts/common.sh@355 -- # echo 2 00:05:26.002 08:38:56 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.002 08:38:56 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.002 08:38:56 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.002 08:38:56 -- scripts/common.sh@368 -- # return 0 00:05:26.002 08:38:56 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.002 08:38:56 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.002 --rc genhtml_branch_coverage=1 00:05:26.002 --rc genhtml_function_coverage=1 00:05:26.002 --rc genhtml_legend=1 00:05:26.002 --rc geninfo_all_blocks=1 00:05:26.002 --rc geninfo_unexecuted_blocks=1 00:05:26.002 00:05:26.002 ' 00:05:26.002 08:38:56 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.002 --rc genhtml_branch_coverage=1 00:05:26.002 --rc genhtml_function_coverage=1 00:05:26.002 --rc genhtml_legend=1 00:05:26.002 --rc geninfo_all_blocks=1 00:05:26.002 --rc geninfo_unexecuted_blocks=1 00:05:26.002 00:05:26.002 ' 00:05:26.003 08:38:56 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.003 --rc genhtml_branch_coverage=1 00:05:26.003 --rc 
genhtml_function_coverage=1 00:05:26.003 --rc genhtml_legend=1 00:05:26.003 --rc geninfo_all_blocks=1 00:05:26.003 --rc geninfo_unexecuted_blocks=1 00:05:26.003 00:05:26.003 ' 00:05:26.003 08:38:56 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.003 --rc genhtml_branch_coverage=1 00:05:26.003 --rc genhtml_function_coverage=1 00:05:26.003 --rc genhtml_legend=1 00:05:26.003 --rc geninfo_all_blocks=1 00:05:26.003 --rc geninfo_unexecuted_blocks=1 00:05:26.003 00:05:26.003 ' 00:05:26.003 08:38:56 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:26.003 08:38:56 -- nvmf/common.sh@7 -- # uname -s 00:05:26.003 08:38:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.003 08:38:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.003 08:38:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.003 08:38:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.003 08:38:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.003 08:38:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.003 08:38:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.003 08:38:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.003 08:38:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.003 08:38:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.003 08:38:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:36bf68f8-f61b-455b-8e26-5b6a0b1cc387 00:05:26.003 08:38:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=36bf68f8-f61b-455b-8e26-5b6a0b1cc387 00:05:26.003 08:38:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.003 08:38:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.003 08:38:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:26.003 08:38:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:05:26.003 08:38:56 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:26.003 08:38:56 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:26.003 08:38:56 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.003 08:38:56 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.003 08:38:56 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.003 08:38:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.003 08:38:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.003 08:38:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.003 08:38:56 -- paths/export.sh@5 -- # export PATH 00:05:26.003 08:38:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.003 08:38:56 -- nvmf/common.sh@51 -- # : 0 00:05:26.003 08:38:56 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:26.003 08:38:56 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:26.003 08:38:56 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:05:26.003 08:38:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.003 08:38:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.003 08:38:56 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:26.003 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:26.003 08:38:56 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:26.003 08:38:56 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:26.003 08:38:56 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:26.003 08:38:56 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:26.003 08:38:56 -- spdk/autotest.sh@32 -- # uname -s 00:05:26.003 08:38:56 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:26.003 08:38:56 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:26.003 08:38:56 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:26.003 08:38:56 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:26.003 08:38:56 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:26.003 08:38:56 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:26.003 08:38:56 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:26.003 08:38:56 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:26.003 08:38:56 -- spdk/autotest.sh@48 -- # udevadm_pid=54337 00:05:26.003 08:38:56 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:26.003 08:38:56 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:26.003 08:38:56 -- pm/common@17 -- # local monitor 00:05:26.003 08:38:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:26.003 08:38:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:26.003 08:38:56 -- pm/common@25 -- # sleep 1 00:05:26.003 08:38:56 -- pm/common@21 -- # date +%s 00:05:26.003 08:38:56 -- 
pm/common@21 -- # date +%s 00:05:26.003 08:38:56 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732091936 00:05:26.003 08:38:56 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732091936 00:05:26.003 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732091936_collect-vmstat.pm.log 00:05:26.003 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732091936_collect-cpu-load.pm.log 00:05:26.945 08:38:57 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:26.945 08:38:57 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:26.945 08:38:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.945 08:38:57 -- common/autotest_common.sh@10 -- # set +x 00:05:26.945 08:38:57 -- spdk/autotest.sh@59 -- # create_test_list 00:05:26.945 08:38:57 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:26.945 08:38:57 -- common/autotest_common.sh@10 -- # set +x 00:05:26.945 08:38:57 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:26.945 08:38:57 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:26.945 08:38:57 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:26.945 08:38:57 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:26.945 08:38:57 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:26.945 08:38:57 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:26.945 08:38:57 -- common/autotest_common.sh@1457 -- # uname 00:05:26.945 08:38:57 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:26.945 08:38:57 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:26.945 08:38:57 -- common/autotest_common.sh@1477 -- 
# uname 00:05:26.945 08:38:57 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:26.945 08:38:57 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:26.945 08:38:57 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:26.945 lcov: LCOV version 1.15 00:05:26.945 08:38:57 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:45.048 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:45.048 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:03.156 08:39:31 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:03.156 08:39:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.156 08:39:31 -- common/autotest_common.sh@10 -- # set +x 00:06:03.156 08:39:31 -- spdk/autotest.sh@78 -- # rm -f 00:06:03.156 08:39:31 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:03.156 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:03.156 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:03.156 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:03.156 08:39:32 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:03.156 08:39:32 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:03.156 08:39:32 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:03.156 08:39:32 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:03.156 
08:39:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:03.156 08:39:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:03.156 08:39:32 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:03.156 08:39:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:03.156 08:39:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:03.156 08:39:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:03.156 08:39:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:03.156 08:39:32 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:03.156 08:39:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:03.156 08:39:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:03.156 08:39:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:03.156 08:39:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:06:03.156 08:39:32 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:06:03.156 08:39:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:03.156 08:39:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:03.156 08:39:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:03.156 08:39:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:06:03.156 08:39:32 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:06:03.157 08:39:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:03.157 08:39:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:03.157 08:39:32 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:03.157 08:39:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:03.157 08:39:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:03.157 08:39:32 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:06:03.157 08:39:32 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:03.157 08:39:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:03.157 No valid GPT data, bailing 00:06:03.157 08:39:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:03.157 08:39:32 -- scripts/common.sh@394 -- # pt= 00:06:03.157 08:39:32 -- scripts/common.sh@395 -- # return 1 00:06:03.157 08:39:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:03.157 1+0 records in 00:06:03.157 1+0 records out 00:06:03.157 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00460695 s, 228 MB/s 00:06:03.157 08:39:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:03.157 08:39:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:03.157 08:39:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:03.157 08:39:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:03.157 08:39:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:03.157 No valid GPT data, bailing 00:06:03.157 08:39:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:03.157 08:39:32 -- scripts/common.sh@394 -- # pt= 00:06:03.157 08:39:32 -- scripts/common.sh@395 -- # return 1 00:06:03.157 08:39:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:03.157 1+0 records in 00:06:03.157 1+0 records out 00:06:03.157 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00437202 s, 240 MB/s 00:06:03.157 08:39:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:03.157 08:39:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:03.157 08:39:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:03.157 08:39:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:03.157 08:39:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:06:03.157 No valid GPT data, bailing 00:06:03.157 08:39:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:03.157 08:39:32 -- scripts/common.sh@394 -- # pt= 00:06:03.157 08:39:32 -- scripts/common.sh@395 -- # return 1 00:06:03.157 08:39:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:03.157 1+0 records in 00:06:03.157 1+0 records out 00:06:03.157 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00554931 s, 189 MB/s 00:06:03.157 08:39:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:03.157 08:39:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:03.157 08:39:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:03.157 08:39:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:03.157 08:39:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:03.157 No valid GPT data, bailing 00:06:03.157 08:39:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:03.157 08:39:32 -- scripts/common.sh@394 -- # pt= 00:06:03.157 08:39:32 -- scripts/common.sh@395 -- # return 1 00:06:03.157 08:39:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:03.157 1+0 records in 00:06:03.157 1+0 records out 00:06:03.157 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00409754 s, 256 MB/s 00:06:03.157 08:39:32 -- spdk/autotest.sh@105 -- # sync 00:06:03.157 08:39:32 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:03.157 08:39:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:03.157 08:39:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:04.093 08:39:34 -- spdk/autotest.sh@111 -- # uname -s 00:06:04.093 08:39:34 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:04.093 08:39:34 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:04.093 08:39:34 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
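The per-namespace cleanup traced above (spdk/autotest.sh together with block_in_use from scripts/common.sh) boils down to: if no partition table is found on the device, zero its first MiB. A simplified sketch, not the exact script; the `spdk-gpt.py` probe that prints "No valid GPT data, bailing" in the trace is omitted, leaving the `blkid` PTTYPE probe as the only check.

```shell
# Simplified sketch of the block_in_use + wipe step traced above:
# a device with no recognizable partition table (blkid -s PTTYPE prints
# nothing) is treated as unused and its first MiB is zeroed, as in the
# dd if=/dev/zero ... bs=1M count=1 records in the log.
wipe_if_unused() {
    local block=$1 pt
    pt=$(blkid -s PTTYPE -o value "$block" 2>/dev/null) || pt=
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$block" bs=1M count=1 conv=notrunc status=none
    fi
}
```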
00:06:04.660 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:04.660 Hugepages 00:06:04.660 node hugesize free / total 00:06:04.660 node0 1048576kB 0 / 0 00:06:04.660 node0 2048kB 0 / 0 00:06:04.660 00:06:04.660 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:04.660 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:04.919 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:04.919 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:04.919 08:39:35 -- spdk/autotest.sh@117 -- # uname -s 00:06:04.919 08:39:35 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:04.919 08:39:35 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:04.919 08:39:35 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:05.487 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:05.745 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:05.745 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:05.745 08:39:36 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:06.681 08:39:37 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:06.681 08:39:37 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:06.681 08:39:37 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:06.681 08:39:37 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:06.681 08:39:37 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:06.681 08:39:37 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:06.681 08:39:37 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:06.681 08:39:37 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:06.681 08:39:37 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:06.939 08:39:37 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:06.939 08:39:37 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:06.939 08:39:37 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:07.197 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:07.197 Waiting for block devices as requested 00:06:07.197 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:07.456 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:07.456 08:39:38 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:07.456 08:39:38 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:07.456 08:39:38 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:07.456 08:39:38 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:06:07.456 08:39:38 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:07.456 08:39:38 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:07.456 08:39:38 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:07.456 08:39:38 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:06:07.456 08:39:38 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:07.456 08:39:38 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:07.456 08:39:38 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:07.456 08:39:38 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:07.456 08:39:38 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:07.456 08:39:38 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:07.456 08:39:38 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:07.457 08:39:38 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:06:07.457 08:39:38 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:07.457 08:39:38 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:07.457 08:39:38 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:07.457 08:39:38 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:07.457 08:39:38 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:07.457 08:39:38 -- common/autotest_common.sh@1543 -- # continue 00:06:07.457 08:39:38 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:07.457 08:39:38 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:07.457 08:39:38 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:07.457 08:39:38 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:06:07.457 08:39:38 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:07.457 08:39:38 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:07.457 08:39:38 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:07.457 08:39:38 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:07.457 08:39:38 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:07.457 08:39:38 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:07.457 08:39:38 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:07.457 08:39:38 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:07.457 08:39:38 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:07.457 08:39:38 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:07.457 08:39:38 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:07.457 08:39:38 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:07.457 08:39:38 -- common/autotest_common.sh@1540 -- # grep unvmcap 
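The `oacs` parsing traced above pulls the Optional Admin Command Support field out of `nvme id-ctrl`; the trace's `oacs_ns_manage=8` is simply bit 3 of `0x12a`, the bit that advertises Namespace Management/Attachment support. A sketch of that bit test:

```shell
# Bit 3 of the NVMe OACS field advertises Namespace Management support.
# The trace computes oacs_ns_manage=8 from oacs=0x12a (0x12a & 0x8 = 8),
# so the [[ 8 -ne 0 ]] branch is taken for both controllers.
oacs_ns_manage() {
    local oacs=$1
    echo $(( oacs & 0x8 ))
}
```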
00:06:07.457 08:39:38 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:07.457 08:39:38 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:07.457 08:39:38 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:07.457 08:39:38 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:07.457 08:39:38 -- common/autotest_common.sh@1543 -- # continue 00:06:07.457 08:39:38 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:07.457 08:39:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:07.457 08:39:38 -- common/autotest_common.sh@10 -- # set +x 00:06:07.457 08:39:38 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:07.457 08:39:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:07.457 08:39:38 -- common/autotest_common.sh@10 -- # set +x 00:06:07.457 08:39:38 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:08.396 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:08.396 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:08.396 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:08.396 08:39:39 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:08.397 08:39:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:08.397 08:39:39 -- common/autotest_common.sh@10 -- # set +x 00:06:08.397 08:39:39 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:08.397 08:39:39 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:08.397 08:39:39 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:08.397 08:39:39 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:08.397 08:39:39 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:08.397 08:39:39 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:08.397 08:39:39 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:08.397 08:39:39 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:08.397 
08:39:39 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:08.397 08:39:39 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:08.397 08:39:39 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:08.397 08:39:39 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:08.397 08:39:39 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:08.657 08:39:39 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:08.657 08:39:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:08.657 08:39:39 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:08.657 08:39:39 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:08.657 08:39:39 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:08.657 08:39:39 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:08.657 08:39:39 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:08.657 08:39:39 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:08.657 08:39:39 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:08.657 08:39:39 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:08.657 08:39:39 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:08.657 08:39:39 -- common/autotest_common.sh@1572 -- # return 0 00:06:08.657 08:39:39 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:08.657 08:39:39 -- common/autotest_common.sh@1580 -- # return 0 00:06:08.657 08:39:39 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:08.657 08:39:39 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:08.657 08:39:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:08.657 08:39:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:08.657 08:39:39 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:08.657 08:39:39 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.657 08:39:39 -- common/autotest_common.sh@10 -- # set +x 00:06:08.657 08:39:39 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:08.657 08:39:39 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:08.657 08:39:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.657 08:39:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.657 08:39:39 -- common/autotest_common.sh@10 -- # set +x 00:06:08.657 ************************************ 00:06:08.657 START TEST env 00:06:08.657 ************************************ 00:06:08.657 08:39:39 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:08.657 * Looking for test storage... 00:06:08.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:08.657 08:39:39 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:08.657 08:39:39 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:08.657 08:39:39 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:08.657 08:39:39 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:08.657 08:39:39 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.657 08:39:39 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.657 08:39:39 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.657 08:39:39 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.657 08:39:39 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.657 08:39:39 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.657 08:39:39 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.657 08:39:39 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.657 08:39:39 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.915 08:39:39 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.915 08:39:39 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.915 08:39:39 env -- 
scripts/common.sh@344 -- # case "$op" in 00:06:08.915 08:39:39 env -- scripts/common.sh@345 -- # : 1 00:06:08.915 08:39:39 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.915 08:39:39 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.915 08:39:39 env -- scripts/common.sh@365 -- # decimal 1 00:06:08.915 08:39:39 env -- scripts/common.sh@353 -- # local d=1 00:06:08.915 08:39:39 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.915 08:39:39 env -- scripts/common.sh@355 -- # echo 1 00:06:08.915 08:39:39 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.915 08:39:39 env -- scripts/common.sh@366 -- # decimal 2 00:06:08.915 08:39:39 env -- scripts/common.sh@353 -- # local d=2 00:06:08.915 08:39:39 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.915 08:39:39 env -- scripts/common.sh@355 -- # echo 2 00:06:08.915 08:39:39 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.915 08:39:39 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.915 08:39:39 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.916 08:39:39 env -- scripts/common.sh@368 -- # return 0 00:06:08.916 08:39:39 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.916 08:39:39 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:08.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.916 --rc genhtml_branch_coverage=1 00:06:08.916 --rc genhtml_function_coverage=1 00:06:08.916 --rc genhtml_legend=1 00:06:08.916 --rc geninfo_all_blocks=1 00:06:08.916 --rc geninfo_unexecuted_blocks=1 00:06:08.916 00:06:08.916 ' 00:06:08.916 08:39:39 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:08.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.916 --rc genhtml_branch_coverage=1 00:06:08.916 --rc genhtml_function_coverage=1 00:06:08.916 --rc genhtml_legend=1 00:06:08.916 --rc 
geninfo_all_blocks=1 00:06:08.916 --rc geninfo_unexecuted_blocks=1 00:06:08.916 00:06:08.916 ' 00:06:08.916 08:39:39 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:08.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.916 --rc genhtml_branch_coverage=1 00:06:08.916 --rc genhtml_function_coverage=1 00:06:08.916 --rc genhtml_legend=1 00:06:08.916 --rc geninfo_all_blocks=1 00:06:08.916 --rc geninfo_unexecuted_blocks=1 00:06:08.916 00:06:08.916 ' 00:06:08.916 08:39:39 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:08.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.916 --rc genhtml_branch_coverage=1 00:06:08.916 --rc genhtml_function_coverage=1 00:06:08.916 --rc genhtml_legend=1 00:06:08.916 --rc geninfo_all_blocks=1 00:06:08.916 --rc geninfo_unexecuted_blocks=1 00:06:08.916 00:06:08.916 ' 00:06:08.916 08:39:39 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:08.916 08:39:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.916 08:39:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.916 08:39:39 env -- common/autotest_common.sh@10 -- # set +x 00:06:08.916 ************************************ 00:06:08.916 START TEST env_memory 00:06:08.916 ************************************ 00:06:08.916 08:39:39 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:08.916 00:06:08.916 00:06:08.916 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.916 http://cunit.sourceforge.net/ 00:06:08.916 00:06:08.916 00:06:08.916 Suite: memory 00:06:08.916 Test: alloc and free memory map ...[2024-11-20 08:39:39.674557] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:08.916 passed 00:06:08.916 Test: mem map translation ...[2024-11-20 08:39:39.736059] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:08.916 [2024-11-20 08:39:39.736181] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:08.916 [2024-11-20 08:39:39.736324] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:08.916 [2024-11-20 08:39:39.736379] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:09.174 passed 00:06:09.175 Test: mem map registration ...[2024-11-20 08:39:39.849522] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:09.175 [2024-11-20 08:39:39.849629] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:09.175 passed 00:06:09.175 Test: mem map adjacent registrations ...passed 00:06:09.175 00:06:09.175 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.175 suites 1 1 n/a 0 0 00:06:09.175 tests 4 4 4 0 0 00:06:09.175 asserts 152 152 152 0 n/a 00:06:09.175 00:06:09.175 Elapsed time = 0.360 seconds 00:06:09.175 00:06:09.175 real 0m0.403s 00:06:09.175 user 0m0.365s 00:06:09.175 sys 0m0.031s 00:06:09.175 08:39:40 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.175 08:39:40 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:09.175 ************************************ 00:06:09.175 END TEST env_memory 00:06:09.175 ************************************ 00:06:09.175 08:39:40 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:09.175 
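The `lt 1.15 2` gate traced earlier (the cmp_versions walk through scripts/common.sh) compares dot-separated version components numerically, left to right, padding the shorter version with zeros. A condensed sketch of that comparison, not the script's exact code; the original also splits on `-` and `:`, which is omitted here.

```shell
# Condensed sketch of cmp_versions/lt from scripts/common.sh as traced
# above: compare dot-separated components numerically, treating missing
# components as 0. Succeeds when v1 < v2.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        if (( x < y )); then return 0; fi
        if (( x > y )); then return 1; fi
    done
    return 1  # equal is not less-than
}
```

With lcov reporting 1.15, `version_lt 1.15 2` succeeds, which is why the run selects the `--rc lcov_branch_coverage=1 ...` option set seen above.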
08:39:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.175 08:39:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.175 08:39:40 env -- common/autotest_common.sh@10 -- # set +x 00:06:09.175 ************************************ 00:06:09.175 START TEST env_vtophys 00:06:09.175 ************************************ 00:06:09.175 08:39:40 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:09.434 EAL: lib.eal log level changed from notice to debug 00:06:09.434 EAL: Detected lcore 0 as core 0 on socket 0 00:06:09.434 EAL: Detected lcore 1 as core 0 on socket 0 00:06:09.434 EAL: Detected lcore 2 as core 0 on socket 0 00:06:09.434 EAL: Detected lcore 3 as core 0 on socket 0 00:06:09.434 EAL: Detected lcore 4 as core 0 on socket 0 00:06:09.434 EAL: Detected lcore 5 as core 0 on socket 0 00:06:09.434 EAL: Detected lcore 6 as core 0 on socket 0 00:06:09.434 EAL: Detected lcore 7 as core 0 on socket 0 00:06:09.434 EAL: Detected lcore 8 as core 0 on socket 0 00:06:09.434 EAL: Detected lcore 9 as core 0 on socket 0 00:06:09.434 EAL: Maximum logical cores by configuration: 128 00:06:09.434 EAL: Detected CPU lcores: 10 00:06:09.434 EAL: Detected NUMA nodes: 1 00:06:09.434 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:09.434 EAL: Detected shared linkage of DPDK 00:06:09.434 EAL: No shared files mode enabled, IPC will be disabled 00:06:09.434 EAL: Selected IOVA mode 'PA' 00:06:09.434 EAL: Probing VFIO support... 00:06:09.434 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:09.434 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:09.434 EAL: Ask a virtual area of 0x2e000 bytes 00:06:09.434 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:09.434 EAL: Setting up physically contiguous memory... 
00:06:09.434 EAL: Setting maximum number of open files to 524288 00:06:09.434 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:09.434 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:09.434 EAL: Ask a virtual area of 0x61000 bytes 00:06:09.434 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:09.434 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:09.434 EAL: Ask a virtual area of 0x400000000 bytes 00:06:09.434 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:09.434 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:09.434 EAL: Ask a virtual area of 0x61000 bytes 00:06:09.434 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:09.434 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:09.434 EAL: Ask a virtual area of 0x400000000 bytes 00:06:09.434 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:09.434 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:09.434 EAL: Ask a virtual area of 0x61000 bytes 00:06:09.434 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:09.434 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:09.434 EAL: Ask a virtual area of 0x400000000 bytes 00:06:09.434 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:09.434 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:09.434 EAL: Ask a virtual area of 0x61000 bytes 00:06:09.434 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:09.434 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:09.434 EAL: Ask a virtual area of 0x400000000 bytes 00:06:09.434 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:09.434 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:09.434 EAL: Hugepages will be freed exactly as allocated. 
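The EAL reservations above are internally consistent: each of the 4 memseg lists holds `n_segs:8192` segments of `hugepage_sz:2097152` (2 MiB), which is exactly the `0x400000000`-byte (16 GiB) virtual areas the log shows being reserved. A quick check of that arithmetic:

```shell
# 8192 segments x 2 MiB hugepages = 2^13 * 2^21 = 2^34 bytes = 16 GiB,
# matching each 0x400000000-byte VA reservation in the EAL trace above.
n_segs=8192
hugepage_sz=2097152
printf '0x%x\n' $(( n_segs * hugepage_sz ))   # prints 0x400000000
```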
00:06:09.434 EAL: No shared files mode enabled, IPC is disabled 00:06:09.434 EAL: No shared files mode enabled, IPC is disabled 00:06:09.434 EAL: TSC frequency is ~2200000 KHz 00:06:09.434 EAL: Main lcore 0 is ready (tid=7fa343502a40;cpuset=[0]) 00:06:09.434 EAL: Trying to obtain current memory policy. 00:06:09.434 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:09.434 EAL: Restoring previous memory policy: 0 00:06:09.434 EAL: request: mp_malloc_sync 00:06:09.434 EAL: No shared files mode enabled, IPC is disabled 00:06:09.434 EAL: Heap on socket 0 was expanded by 2MB 00:06:09.434 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:09.434 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:09.434 EAL: Mem event callback 'spdk:(nil)' registered 00:06:09.434 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:09.434 00:06:09.434 00:06:09.434 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.434 http://cunit.sourceforge.net/ 00:06:09.434 00:06:09.434 00:06:09.434 Suite: components_suite 00:06:10.001 Test: vtophys_malloc_test ...passed 00:06:10.001 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:10.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.001 EAL: Restoring previous memory policy: 4 00:06:10.001 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.001 EAL: request: mp_malloc_sync 00:06:10.001 EAL: No shared files mode enabled, IPC is disabled 00:06:10.001 EAL: Heap on socket 0 was expanded by 4MB 00:06:10.001 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.001 EAL: request: mp_malloc_sync 00:06:10.001 EAL: No shared files mode enabled, IPC is disabled 00:06:10.001 EAL: Heap on socket 0 was shrunk by 4MB 00:06:10.001 EAL: Trying to obtain current memory policy. 
00:06:10.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.001 EAL: Restoring previous memory policy: 4 00:06:10.001 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.001 EAL: request: mp_malloc_sync 00:06:10.001 EAL: No shared files mode enabled, IPC is disabled 00:06:10.001 EAL: Heap on socket 0 was expanded by 6MB 00:06:10.001 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.001 EAL: request: mp_malloc_sync 00:06:10.001 EAL: No shared files mode enabled, IPC is disabled 00:06:10.001 EAL: Heap on socket 0 was shrunk by 6MB 00:06:10.001 EAL: Trying to obtain current memory policy. 00:06:10.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.001 EAL: Restoring previous memory policy: 4 00:06:10.001 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.001 EAL: request: mp_malloc_sync 00:06:10.001 EAL: No shared files mode enabled, IPC is disabled 00:06:10.001 EAL: Heap on socket 0 was expanded by 10MB 00:06:10.001 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.001 EAL: request: mp_malloc_sync 00:06:10.001 EAL: No shared files mode enabled, IPC is disabled 00:06:10.001 EAL: Heap on socket 0 was shrunk by 10MB 00:06:10.001 EAL: Trying to obtain current memory policy. 00:06:10.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.001 EAL: Restoring previous memory policy: 4 00:06:10.001 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.001 EAL: request: mp_malloc_sync 00:06:10.001 EAL: No shared files mode enabled, IPC is disabled 00:06:10.001 EAL: Heap on socket 0 was expanded by 18MB 00:06:10.001 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.001 EAL: request: mp_malloc_sync 00:06:10.001 EAL: No shared files mode enabled, IPC is disabled 00:06:10.001 EAL: Heap on socket 0 was shrunk by 18MB 00:06:10.001 EAL: Trying to obtain current memory policy. 
00:06:10.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.001 EAL: Restoring previous memory policy: 4 00:06:10.002 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.002 EAL: request: mp_malloc_sync 00:06:10.002 EAL: No shared files mode enabled, IPC is disabled 00:06:10.002 EAL: Heap on socket 0 was expanded by 34MB 00:06:10.260 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.260 EAL: request: mp_malloc_sync 00:06:10.260 EAL: No shared files mode enabled, IPC is disabled 00:06:10.260 EAL: Heap on socket 0 was shrunk by 34MB 00:06:10.260 EAL: Trying to obtain current memory policy. 00:06:10.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.260 EAL: Restoring previous memory policy: 4 00:06:10.260 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.260 EAL: request: mp_malloc_sync 00:06:10.260 EAL: No shared files mode enabled, IPC is disabled 00:06:10.260 EAL: Heap on socket 0 was expanded by 66MB 00:06:10.260 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.260 EAL: request: mp_malloc_sync 00:06:10.260 EAL: No shared files mode enabled, IPC is disabled 00:06:10.260 EAL: Heap on socket 0 was shrunk by 66MB 00:06:10.549 EAL: Trying to obtain current memory policy. 00:06:10.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.549 EAL: Restoring previous memory policy: 4 00:06:10.549 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.549 EAL: request: mp_malloc_sync 00:06:10.549 EAL: No shared files mode enabled, IPC is disabled 00:06:10.549 EAL: Heap on socket 0 was expanded by 130MB 00:06:10.808 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.808 EAL: request: mp_malloc_sync 00:06:10.808 EAL: No shared files mode enabled, IPC is disabled 00:06:10.808 EAL: Heap on socket 0 was shrunk by 130MB 00:06:10.808 EAL: Trying to obtain current memory policy. 
00:06:10.808 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.067 EAL: Restoring previous memory policy: 4 00:06:11.067 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.067 EAL: request: mp_malloc_sync 00:06:11.067 EAL: No shared files mode enabled, IPC is disabled 00:06:11.067 EAL: Heap on socket 0 was expanded by 258MB 00:06:11.634 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.634 EAL: request: mp_malloc_sync 00:06:11.634 EAL: No shared files mode enabled, IPC is disabled 00:06:11.634 EAL: Heap on socket 0 was shrunk by 258MB 00:06:11.893 EAL: Trying to obtain current memory policy. 00:06:11.893 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.153 EAL: Restoring previous memory policy: 4 00:06:12.153 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.153 EAL: request: mp_malloc_sync 00:06:12.153 EAL: No shared files mode enabled, IPC is disabled 00:06:12.153 EAL: Heap on socket 0 was expanded by 514MB 00:06:13.089 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.347 EAL: request: mp_malloc_sync 00:06:13.347 EAL: No shared files mode enabled, IPC is disabled 00:06:13.347 EAL: Heap on socket 0 was shrunk by 514MB 00:06:14.285 EAL: Trying to obtain current memory policy. 
00:06:14.285 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:14.285 EAL: Restoring previous memory policy: 4 00:06:14.285 EAL: Calling mem event callback 'spdk:(nil)' 00:06:14.285 EAL: request: mp_malloc_sync 00:06:14.285 EAL: No shared files mode enabled, IPC is disabled 00:06:14.285 EAL: Heap on socket 0 was expanded by 1026MB 00:06:16.188 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.447 EAL: request: mp_malloc_sync 00:06:16.447 EAL: No shared files mode enabled, IPC is disabled 00:06:16.447 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:18.351 passed 00:06:18.351 00:06:18.351 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.351 suites 1 1 n/a 0 0 00:06:18.351 tests 2 2 2 0 0 00:06:18.351 asserts 5789 5789 5789 0 n/a 00:06:18.351 00:06:18.351 Elapsed time = 8.396 seconds 00:06:18.351 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.351 EAL: request: mp_malloc_sync 00:06:18.351 EAL: No shared files mode enabled, IPC is disabled 00:06:18.351 EAL: Heap on socket 0 was shrunk by 2MB 00:06:18.351 EAL: No shared files mode enabled, IPC is disabled 00:06:18.351 EAL: No shared files mode enabled, IPC is disabled 00:06:18.351 EAL: No shared files mode enabled, IPC is disabled 00:06:18.351 00:06:18.351 real 0m8.760s 00:06:18.351 user 0m7.405s 00:06:18.351 sys 0m1.167s 00:06:18.351 08:39:48 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.351 ************************************ 00:06:18.351 END TEST env_vtophys 00:06:18.351 ************************************ 00:06:18.351 08:39:48 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:18.351 08:39:48 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:18.351 08:39:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.351 08:39:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.352 08:39:48 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.352 
************************************ 00:06:18.352 START TEST env_pci 00:06:18.352 ************************************ 00:06:18.352 08:39:48 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:18.352 00:06:18.352 00:06:18.352 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.352 http://cunit.sourceforge.net/ 00:06:18.352 00:06:18.352 00:06:18.352 Suite: pci 00:06:18.352 Test: pci_hook ...[2024-11-20 08:39:48.912258] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56681 has claimed it 00:06:18.352 passed 00:06:18.352 00:06:18.352 EAL: Cannot find device (10000:00:01.0) 00:06:18.352 EAL: Failed to attach device on primary process 00:06:18.352 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.352 suites 1 1 n/a 0 0 00:06:18.352 tests 1 1 1 0 0 00:06:18.352 asserts 25 25 25 0 n/a 00:06:18.352 00:06:18.352 Elapsed time = 0.010 seconds 00:06:18.352 00:06:18.352 real 0m0.092s 00:06:18.352 user 0m0.038s 00:06:18.352 sys 0m0.054s 00:06:18.352 08:39:48 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.352 ************************************ 00:06:18.352 END TEST env_pci 00:06:18.352 ************************************ 00:06:18.352 08:39:48 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:18.352 08:39:49 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:18.352 08:39:49 env -- env/env.sh@15 -- # uname 00:06:18.352 08:39:49 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:18.352 08:39:49 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:18.352 08:39:49 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:18.352 08:39:49 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:18.352 08:39:49 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.352 08:39:49 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.352 ************************************ 00:06:18.352 START TEST env_dpdk_post_init 00:06:18.352 ************************************ 00:06:18.352 08:39:49 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:18.352 EAL: Detected CPU lcores: 10 00:06:18.352 EAL: Detected NUMA nodes: 1 00:06:18.352 EAL: Detected shared linkage of DPDK 00:06:18.352 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:18.352 EAL: Selected IOVA mode 'PA' 00:06:18.352 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:18.610 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:18.610 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:18.610 Starting DPDK initialization... 00:06:18.610 Starting SPDK post initialization... 00:06:18.610 SPDK NVMe probe 00:06:18.610 Attaching to 0000:00:10.0 00:06:18.610 Attaching to 0000:00:11.0 00:06:18.610 Attached to 0000:00:10.0 00:06:18.610 Attached to 0000:00:11.0 00:06:18.611 Cleaning up... 
00:06:18.611 00:06:18.611 real 0m0.311s 00:06:18.611 user 0m0.118s 00:06:18.611 sys 0m0.094s 00:06:18.611 08:39:49 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.611 08:39:49 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:18.611 ************************************ 00:06:18.611 END TEST env_dpdk_post_init 00:06:18.611 ************************************ 00:06:18.611 08:39:49 env -- env/env.sh@26 -- # uname 00:06:18.611 08:39:49 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:18.611 08:39:49 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:18.611 08:39:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.611 08:39:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.611 08:39:49 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.611 ************************************ 00:06:18.611 START TEST env_mem_callbacks 00:06:18.611 ************************************ 00:06:18.611 08:39:49 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:18.611 EAL: Detected CPU lcores: 10 00:06:18.611 EAL: Detected NUMA nodes: 1 00:06:18.611 EAL: Detected shared linkage of DPDK 00:06:18.611 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:18.611 EAL: Selected IOVA mode 'PA' 00:06:18.870 00:06:18.870 00:06:18.870 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.870 http://cunit.sourceforge.net/ 00:06:18.870 00:06:18.870 00:06:18.870 Suite: memory 00:06:18.870 Test: test ... 
00:06:18.870 register 0x200000200000 2097152 00:06:18.870 malloc 3145728 00:06:18.870 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:18.870 register 0x200000400000 4194304 00:06:18.870 buf 0x2000004fffc0 len 3145728 PASSED 00:06:18.870 malloc 64 00:06:18.870 buf 0x2000004ffec0 len 64 PASSED 00:06:18.870 malloc 4194304 00:06:18.870 register 0x200000800000 6291456 00:06:18.870 buf 0x2000009fffc0 len 4194304 PASSED 00:06:18.870 free 0x2000004fffc0 3145728 00:06:18.870 free 0x2000004ffec0 64 00:06:18.870 unregister 0x200000400000 4194304 PASSED 00:06:18.870 free 0x2000009fffc0 4194304 00:06:18.870 unregister 0x200000800000 6291456 PASSED 00:06:18.870 malloc 8388608 00:06:18.870 register 0x200000400000 10485760 00:06:18.870 buf 0x2000005fffc0 len 8388608 PASSED 00:06:18.870 free 0x2000005fffc0 8388608 00:06:18.870 unregister 0x200000400000 10485760 PASSED 00:06:18.870 passed 00:06:18.870 00:06:18.870 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.870 suites 1 1 n/a 0 0 00:06:18.870 tests 1 1 1 0 0 00:06:18.870 asserts 15 15 15 0 n/a 00:06:18.870 00:06:18.870 Elapsed time = 0.088 seconds 00:06:18.870 00:06:18.870 real 0m0.311s 00:06:18.870 user 0m0.121s 00:06:18.870 sys 0m0.087s 00:06:18.870 ************************************ 00:06:18.870 END TEST env_mem_callbacks 00:06:18.870 ************************************ 00:06:18.870 08:39:49 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.870 08:39:49 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:18.870 00:06:18.870 real 0m10.356s 00:06:18.870 user 0m8.265s 00:06:18.870 sys 0m1.684s 00:06:18.870 08:39:49 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.870 08:39:49 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.870 ************************************ 00:06:18.870 END TEST env 00:06:18.870 ************************************ 00:06:19.129 08:39:49 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:19.129 08:39:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.129 08:39:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.129 08:39:49 -- common/autotest_common.sh@10 -- # set +x 00:06:19.129 ************************************ 00:06:19.129 START TEST rpc 00:06:19.129 ************************************ 00:06:19.129 08:39:49 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:19.129 * Looking for test storage... 00:06:19.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:19.129 08:39:49 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:19.129 08:39:49 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:19.129 08:39:49 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:19.129 08:39:50 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:19.129 08:39:50 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.129 08:39:50 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.129 08:39:50 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.129 08:39:50 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.129 08:39:50 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.129 08:39:50 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.129 08:39:50 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.129 08:39:50 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.129 08:39:50 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.129 08:39:50 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.129 08:39:50 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.129 08:39:50 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:19.129 08:39:50 rpc -- scripts/common.sh@345 -- # : 1 00:06:19.129 08:39:50 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.129 08:39:50 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.129 08:39:50 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:19.129 08:39:50 rpc -- scripts/common.sh@353 -- # local d=1 00:06:19.129 08:39:50 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.129 08:39:50 rpc -- scripts/common.sh@355 -- # echo 1 00:06:19.129 08:39:50 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.129 08:39:50 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:19.129 08:39:50 rpc -- scripts/common.sh@353 -- # local d=2 00:06:19.129 08:39:50 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.129 08:39:50 rpc -- scripts/common.sh@355 -- # echo 2 00:06:19.129 08:39:50 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.129 08:39:50 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.129 08:39:50 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.129 08:39:50 rpc -- scripts/common.sh@368 -- # return 0 00:06:19.129 08:39:50 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.129 08:39:50 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:19.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.129 --rc genhtml_branch_coverage=1 00:06:19.129 --rc genhtml_function_coverage=1 00:06:19.129 --rc genhtml_legend=1 00:06:19.129 --rc geninfo_all_blocks=1 00:06:19.129 --rc geninfo_unexecuted_blocks=1 00:06:19.129 00:06:19.129 ' 00:06:19.129 08:39:50 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:19.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.129 --rc genhtml_branch_coverage=1 00:06:19.129 --rc genhtml_function_coverage=1 00:06:19.129 --rc genhtml_legend=1 00:06:19.129 --rc geninfo_all_blocks=1 00:06:19.129 --rc geninfo_unexecuted_blocks=1 00:06:19.129 00:06:19.129 ' 00:06:19.129 08:39:50 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:19.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:19.129 --rc genhtml_branch_coverage=1 00:06:19.129 --rc genhtml_function_coverage=1 00:06:19.129 --rc genhtml_legend=1 00:06:19.129 --rc geninfo_all_blocks=1 00:06:19.129 --rc geninfo_unexecuted_blocks=1 00:06:19.129 00:06:19.129 ' 00:06:19.129 08:39:50 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:19.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.129 --rc genhtml_branch_coverage=1 00:06:19.129 --rc genhtml_function_coverage=1 00:06:19.129 --rc genhtml_legend=1 00:06:19.130 --rc geninfo_all_blocks=1 00:06:19.130 --rc geninfo_unexecuted_blocks=1 00:06:19.130 00:06:19.130 ' 00:06:19.130 08:39:50 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56814 00:06:19.130 08:39:50 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.130 08:39:50 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:19.130 08:39:50 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56814 00:06:19.130 08:39:50 rpc -- common/autotest_common.sh@835 -- # '[' -z 56814 ']' 00:06:19.130 08:39:50 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.130 08:39:50 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.130 08:39:50 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.130 08:39:50 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.130 08:39:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.389 [2024-11-20 08:39:50.164295] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:06:19.389 [2024-11-20 08:39:50.164504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56814 ] 00:06:19.649 [2024-11-20 08:39:50.359808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.649 [2024-11-20 08:39:50.522050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:19.649 [2024-11-20 08:39:50.522178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56814' to capture a snapshot of events at runtime. 00:06:19.649 [2024-11-20 08:39:50.522204] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:19.649 [2024-11-20 08:39:50.522225] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:19.649 [2024-11-20 08:39:50.522240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56814 for offline analysis/debug. 
00:06:19.649 [2024-11-20 08:39:50.523851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.585 08:39:51 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.585 08:39:51 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:20.585 08:39:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:20.585 08:39:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:20.585 08:39:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:20.585 08:39:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:20.585 08:39:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.585 08:39:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.585 08:39:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.585 ************************************ 00:06:20.585 START TEST rpc_integrity 00:06:20.585 ************************************ 00:06:20.585 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:20.585 08:39:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:20.585 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.585 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.585 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.585 08:39:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:20.585 08:39:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:20.585 08:39:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:20.585 08:39:51 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:20.585 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.585 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.844 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.844 08:39:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:20.844 08:39:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:20.844 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.844 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.844 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.844 08:39:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:20.844 { 00:06:20.844 "name": "Malloc0", 00:06:20.844 "aliases": [ 00:06:20.844 "5ce4f052-7bcc-4b3b-84d8-579002b694e0" 00:06:20.844 ], 00:06:20.844 "product_name": "Malloc disk", 00:06:20.844 "block_size": 512, 00:06:20.844 "num_blocks": 16384, 00:06:20.844 "uuid": "5ce4f052-7bcc-4b3b-84d8-579002b694e0", 00:06:20.844 "assigned_rate_limits": { 00:06:20.844 "rw_ios_per_sec": 0, 00:06:20.844 "rw_mbytes_per_sec": 0, 00:06:20.844 "r_mbytes_per_sec": 0, 00:06:20.844 "w_mbytes_per_sec": 0 00:06:20.844 }, 00:06:20.844 "claimed": false, 00:06:20.844 "zoned": false, 00:06:20.844 "supported_io_types": { 00:06:20.844 "read": true, 00:06:20.844 "write": true, 00:06:20.844 "unmap": true, 00:06:20.844 "flush": true, 00:06:20.844 "reset": true, 00:06:20.844 "nvme_admin": false, 00:06:20.844 "nvme_io": false, 00:06:20.844 "nvme_io_md": false, 00:06:20.844 "write_zeroes": true, 00:06:20.844 "zcopy": true, 00:06:20.844 "get_zone_info": false, 00:06:20.844 "zone_management": false, 00:06:20.844 "zone_append": false, 00:06:20.844 "compare": false, 00:06:20.844 "compare_and_write": false, 00:06:20.844 "abort": true, 00:06:20.844 "seek_hole": false, 
00:06:20.844 "seek_data": false, 00:06:20.844 "copy": true, 00:06:20.844 "nvme_iov_md": false 00:06:20.844 }, 00:06:20.844 "memory_domains": [ 00:06:20.844 { 00:06:20.844 "dma_device_id": "system", 00:06:20.844 "dma_device_type": 1 00:06:20.844 }, 00:06:20.844 { 00:06:20.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.844 "dma_device_type": 2 00:06:20.844 } 00:06:20.844 ], 00:06:20.844 "driver_specific": {} 00:06:20.844 } 00:06:20.844 ]' 00:06:20.844 08:39:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:20.844 08:39:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:20.844 08:39:51 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:20.844 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.844 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.844 [2024-11-20 08:39:51.592405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:20.844 [2024-11-20 08:39:51.592485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:20.844 [2024-11-20 08:39:51.592520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:20.844 [2024-11-20 08:39:51.592558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:20.844 [2024-11-20 08:39:51.595543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:20.844 [2024-11-20 08:39:51.595600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:20.844 Passthru0 00:06:20.844 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.844 08:39:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:20.844 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.844 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:06:20.844 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.844 08:39:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:20.844 { 00:06:20.844 "name": "Malloc0", 00:06:20.845 "aliases": [ 00:06:20.845 "5ce4f052-7bcc-4b3b-84d8-579002b694e0" 00:06:20.845 ], 00:06:20.845 "product_name": "Malloc disk", 00:06:20.845 "block_size": 512, 00:06:20.845 "num_blocks": 16384, 00:06:20.845 "uuid": "5ce4f052-7bcc-4b3b-84d8-579002b694e0", 00:06:20.845 "assigned_rate_limits": { 00:06:20.845 "rw_ios_per_sec": 0, 00:06:20.845 "rw_mbytes_per_sec": 0, 00:06:20.845 "r_mbytes_per_sec": 0, 00:06:20.845 "w_mbytes_per_sec": 0 00:06:20.845 }, 00:06:20.845 "claimed": true, 00:06:20.845 "claim_type": "exclusive_write", 00:06:20.845 "zoned": false, 00:06:20.845 "supported_io_types": { 00:06:20.845 "read": true, 00:06:20.845 "write": true, 00:06:20.845 "unmap": true, 00:06:20.845 "flush": true, 00:06:20.845 "reset": true, 00:06:20.845 "nvme_admin": false, 00:06:20.845 "nvme_io": false, 00:06:20.845 "nvme_io_md": false, 00:06:20.845 "write_zeroes": true, 00:06:20.845 "zcopy": true, 00:06:20.845 "get_zone_info": false, 00:06:20.845 "zone_management": false, 00:06:20.845 "zone_append": false, 00:06:20.845 "compare": false, 00:06:20.845 "compare_and_write": false, 00:06:20.845 "abort": true, 00:06:20.845 "seek_hole": false, 00:06:20.845 "seek_data": false, 00:06:20.845 "copy": true, 00:06:20.845 "nvme_iov_md": false 00:06:20.845 }, 00:06:20.845 "memory_domains": [ 00:06:20.845 { 00:06:20.845 "dma_device_id": "system", 00:06:20.845 "dma_device_type": 1 00:06:20.845 }, 00:06:20.845 { 00:06:20.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.845 "dma_device_type": 2 00:06:20.845 } 00:06:20.845 ], 00:06:20.845 "driver_specific": {} 00:06:20.845 }, 00:06:20.845 { 00:06:20.845 "name": "Passthru0", 00:06:20.845 "aliases": [ 00:06:20.845 "a7883152-323c-5342-8fec-8fdc74deb2b4" 00:06:20.845 ], 00:06:20.845 "product_name": "passthru", 00:06:20.845 
"block_size": 512, 00:06:20.845 "num_blocks": 16384, 00:06:20.845 "uuid": "a7883152-323c-5342-8fec-8fdc74deb2b4", 00:06:20.845 "assigned_rate_limits": { 00:06:20.845 "rw_ios_per_sec": 0, 00:06:20.845 "rw_mbytes_per_sec": 0, 00:06:20.845 "r_mbytes_per_sec": 0, 00:06:20.845 "w_mbytes_per_sec": 0 00:06:20.845 }, 00:06:20.845 "claimed": false, 00:06:20.845 "zoned": false, 00:06:20.845 "supported_io_types": { 00:06:20.845 "read": true, 00:06:20.845 "write": true, 00:06:20.845 "unmap": true, 00:06:20.845 "flush": true, 00:06:20.845 "reset": true, 00:06:20.845 "nvme_admin": false, 00:06:20.845 "nvme_io": false, 00:06:20.845 "nvme_io_md": false, 00:06:20.845 "write_zeroes": true, 00:06:20.845 "zcopy": true, 00:06:20.845 "get_zone_info": false, 00:06:20.845 "zone_management": false, 00:06:20.845 "zone_append": false, 00:06:20.845 "compare": false, 00:06:20.845 "compare_and_write": false, 00:06:20.845 "abort": true, 00:06:20.845 "seek_hole": false, 00:06:20.845 "seek_data": false, 00:06:20.845 "copy": true, 00:06:20.845 "nvme_iov_md": false 00:06:20.845 }, 00:06:20.845 "memory_domains": [ 00:06:20.845 { 00:06:20.845 "dma_device_id": "system", 00:06:20.845 "dma_device_type": 1 00:06:20.845 }, 00:06:20.845 { 00:06:20.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.845 "dma_device_type": 2 00:06:20.845 } 00:06:20.845 ], 00:06:20.845 "driver_specific": { 00:06:20.845 "passthru": { 00:06:20.845 "name": "Passthru0", 00:06:20.845 "base_bdev_name": "Malloc0" 00:06:20.845 } 00:06:20.845 } 00:06:20.845 } 00:06:20.845 ]' 00:06:20.845 08:39:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:20.845 08:39:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:20.845 08:39:51 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:20.845 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.845 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.845 08:39:51 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.845 08:39:51 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:20.845 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.845 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.845 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.845 08:39:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:20.845 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.845 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.845 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.845 08:39:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:20.845 08:39:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:21.104 08:39:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:21.104 00:06:21.104 real 0m0.346s 00:06:21.104 user 0m0.211s 00:06:21.104 sys 0m0.044s 00:06:21.104 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.104 ************************************ 00:06:21.104 END TEST rpc_integrity 00:06:21.104 08:39:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.104 ************************************ 00:06:21.104 08:39:51 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:21.104 08:39:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.104 08:39:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.104 08:39:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.104 ************************************ 00:06:21.104 START TEST rpc_plugins 00:06:21.104 ************************************ 00:06:21.104 08:39:51 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:21.104 08:39:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:06:21.104 08:39:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.104 08:39:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:21.104 08:39:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.104 08:39:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:21.104 08:39:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:21.104 08:39:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.104 08:39:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:21.104 08:39:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.104 08:39:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:21.104 { 00:06:21.104 "name": "Malloc1", 00:06:21.104 "aliases": [ 00:06:21.104 "0df44716-ecea-4cd1-89e4-e87d5de14238" 00:06:21.104 ], 00:06:21.104 "product_name": "Malloc disk", 00:06:21.104 "block_size": 4096, 00:06:21.104 "num_blocks": 256, 00:06:21.104 "uuid": "0df44716-ecea-4cd1-89e4-e87d5de14238", 00:06:21.104 "assigned_rate_limits": { 00:06:21.104 "rw_ios_per_sec": 0, 00:06:21.104 "rw_mbytes_per_sec": 0, 00:06:21.104 "r_mbytes_per_sec": 0, 00:06:21.104 "w_mbytes_per_sec": 0 00:06:21.104 }, 00:06:21.104 "claimed": false, 00:06:21.104 "zoned": false, 00:06:21.104 "supported_io_types": { 00:06:21.104 "read": true, 00:06:21.104 "write": true, 00:06:21.104 "unmap": true, 00:06:21.104 "flush": true, 00:06:21.104 "reset": true, 00:06:21.104 "nvme_admin": false, 00:06:21.104 "nvme_io": false, 00:06:21.104 "nvme_io_md": false, 00:06:21.104 "write_zeroes": true, 00:06:21.104 "zcopy": true, 00:06:21.104 "get_zone_info": false, 00:06:21.104 "zone_management": false, 00:06:21.104 "zone_append": false, 00:06:21.104 "compare": false, 00:06:21.104 "compare_and_write": false, 00:06:21.104 "abort": true, 00:06:21.104 "seek_hole": false, 00:06:21.104 "seek_data": false, 00:06:21.104 "copy": 
true, 00:06:21.104 "nvme_iov_md": false 00:06:21.104 }, 00:06:21.104 "memory_domains": [ 00:06:21.104 { 00:06:21.104 "dma_device_id": "system", 00:06:21.104 "dma_device_type": 1 00:06:21.104 }, 00:06:21.104 { 00:06:21.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.104 "dma_device_type": 2 00:06:21.104 } 00:06:21.104 ], 00:06:21.104 "driver_specific": {} 00:06:21.104 } 00:06:21.104 ]' 00:06:21.104 08:39:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:21.104 08:39:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:21.104 08:39:51 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:21.104 08:39:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.104 08:39:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:21.104 08:39:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.104 08:39:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:21.104 08:39:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.104 08:39:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:21.104 08:39:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.104 08:39:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:21.104 08:39:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:21.104 08:39:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:21.104 00:06:21.104 real 0m0.175s 00:06:21.104 user 0m0.118s 00:06:21.104 sys 0m0.016s 00:06:21.104 ************************************ 00:06:21.104 END TEST rpc_plugins 00:06:21.104 ************************************ 00:06:21.105 08:39:52 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.105 08:39:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:21.364 08:39:52 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:21.364 08:39:52 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.364 08:39:52 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.364 08:39:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.364 ************************************ 00:06:21.364 START TEST rpc_trace_cmd_test 00:06:21.364 ************************************ 00:06:21.364 08:39:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:21.364 08:39:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:21.364 08:39:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:21.364 08:39:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.364 08:39:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.364 08:39:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.364 08:39:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:21.364 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56814", 00:06:21.364 "tpoint_group_mask": "0x8", 00:06:21.364 "iscsi_conn": { 00:06:21.364 "mask": "0x2", 00:06:21.364 "tpoint_mask": "0x0" 00:06:21.364 }, 00:06:21.364 "scsi": { 00:06:21.364 "mask": "0x4", 00:06:21.364 "tpoint_mask": "0x0" 00:06:21.364 }, 00:06:21.364 "bdev": { 00:06:21.364 "mask": "0x8", 00:06:21.364 "tpoint_mask": "0xffffffffffffffff" 00:06:21.364 }, 00:06:21.364 "nvmf_rdma": { 00:06:21.364 "mask": "0x10", 00:06:21.364 "tpoint_mask": "0x0" 00:06:21.364 }, 00:06:21.364 "nvmf_tcp": { 00:06:21.364 "mask": "0x20", 00:06:21.364 "tpoint_mask": "0x0" 00:06:21.364 }, 00:06:21.364 "ftl": { 00:06:21.364 "mask": "0x40", 00:06:21.364 "tpoint_mask": "0x0" 00:06:21.364 }, 00:06:21.364 "blobfs": { 00:06:21.364 "mask": "0x80", 00:06:21.364 "tpoint_mask": "0x0" 00:06:21.364 }, 00:06:21.364 "dsa": { 00:06:21.364 "mask": "0x200", 00:06:21.364 "tpoint_mask": "0x0" 00:06:21.364 }, 00:06:21.364 "thread": { 00:06:21.364 "mask": "0x400", 00:06:21.364 
"tpoint_mask": "0x0" 00:06:21.364 }, 00:06:21.364 "nvme_pcie": { 00:06:21.364 "mask": "0x800", 00:06:21.364 "tpoint_mask": "0x0" 00:06:21.364 }, 00:06:21.364 "iaa": { 00:06:21.364 "mask": "0x1000", 00:06:21.364 "tpoint_mask": "0x0" 00:06:21.364 }, 00:06:21.364 "nvme_tcp": { 00:06:21.364 "mask": "0x2000", 00:06:21.364 "tpoint_mask": "0x0" 00:06:21.364 }, 00:06:21.364 "bdev_nvme": { 00:06:21.364 "mask": "0x4000", 00:06:21.364 "tpoint_mask": "0x0" 00:06:21.364 }, 00:06:21.364 "sock": { 00:06:21.364 "mask": "0x8000", 00:06:21.364 "tpoint_mask": "0x0" 00:06:21.364 }, 00:06:21.364 "blob": { 00:06:21.364 "mask": "0x10000", 00:06:21.364 "tpoint_mask": "0x0" 00:06:21.364 }, 00:06:21.364 "bdev_raid": { 00:06:21.364 "mask": "0x20000", 00:06:21.364 "tpoint_mask": "0x0" 00:06:21.364 }, 00:06:21.364 "scheduler": { 00:06:21.364 "mask": "0x40000", 00:06:21.364 "tpoint_mask": "0x0" 00:06:21.364 } 00:06:21.364 }' 00:06:21.364 08:39:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:21.364 08:39:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:21.364 08:39:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:21.364 08:39:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:21.364 08:39:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:21.364 08:39:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:21.364 08:39:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:21.364 08:39:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:21.364 08:39:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:21.624 08:39:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:21.624 00:06:21.624 real 0m0.267s 00:06:21.624 user 0m0.231s 00:06:21.624 sys 0m0.027s 00:06:21.624 ************************************ 00:06:21.624 END TEST rpc_trace_cmd_test 00:06:21.624 
************************************ 00:06:21.624 08:39:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.624 08:39:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.624 08:39:52 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:21.624 08:39:52 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:21.624 08:39:52 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:21.624 08:39:52 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.624 08:39:52 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.624 08:39:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.624 ************************************ 00:06:21.624 START TEST rpc_daemon_integrity 00:06:21.624 ************************************ 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:21.624 { 00:06:21.624 "name": "Malloc2", 00:06:21.624 "aliases": [ 00:06:21.624 "60f58488-fb8f-4fbe-b8c5-0177d1adbec7" 00:06:21.624 ], 00:06:21.624 "product_name": "Malloc disk", 00:06:21.624 "block_size": 512, 00:06:21.624 "num_blocks": 16384, 00:06:21.624 "uuid": "60f58488-fb8f-4fbe-b8c5-0177d1adbec7", 00:06:21.624 "assigned_rate_limits": { 00:06:21.624 "rw_ios_per_sec": 0, 00:06:21.624 "rw_mbytes_per_sec": 0, 00:06:21.624 "r_mbytes_per_sec": 0, 00:06:21.624 "w_mbytes_per_sec": 0 00:06:21.624 }, 00:06:21.624 "claimed": false, 00:06:21.624 "zoned": false, 00:06:21.624 "supported_io_types": { 00:06:21.624 "read": true, 00:06:21.624 "write": true, 00:06:21.624 "unmap": true, 00:06:21.624 "flush": true, 00:06:21.624 "reset": true, 00:06:21.624 "nvme_admin": false, 00:06:21.624 "nvme_io": false, 00:06:21.624 "nvme_io_md": false, 00:06:21.624 "write_zeroes": true, 00:06:21.624 "zcopy": true, 00:06:21.624 "get_zone_info": false, 00:06:21.624 "zone_management": false, 00:06:21.624 "zone_append": false, 00:06:21.624 "compare": false, 00:06:21.624 "compare_and_write": false, 00:06:21.624 "abort": true, 00:06:21.624 "seek_hole": false, 00:06:21.624 "seek_data": false, 00:06:21.624 "copy": true, 00:06:21.624 "nvme_iov_md": false 00:06:21.624 }, 00:06:21.624 "memory_domains": [ 00:06:21.624 { 00:06:21.624 "dma_device_id": "system", 00:06:21.624 "dma_device_type": 1 00:06:21.624 }, 00:06:21.624 { 00:06:21.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.624 "dma_device_type": 2 00:06:21.624 } 
00:06:21.624 ], 00:06:21.624 "driver_specific": {} 00:06:21.624 } 00:06:21.624 ]' 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.624 [2024-11-20 08:39:52.527797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:21.624 [2024-11-20 08:39:52.527871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:21.624 [2024-11-20 08:39:52.527915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:21.624 [2024-11-20 08:39:52.527944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:21.624 [2024-11-20 08:39:52.530928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:21.624 [2024-11-20 08:39:52.530989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:21.624 Passthru0 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.624 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.883 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.883 08:39:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:21.883 { 00:06:21.883 "name": "Malloc2", 00:06:21.883 "aliases": [ 00:06:21.883 "60f58488-fb8f-4fbe-b8c5-0177d1adbec7" 
00:06:21.883 ], 00:06:21.883 "product_name": "Malloc disk", 00:06:21.883 "block_size": 512, 00:06:21.883 "num_blocks": 16384, 00:06:21.883 "uuid": "60f58488-fb8f-4fbe-b8c5-0177d1adbec7", 00:06:21.883 "assigned_rate_limits": { 00:06:21.883 "rw_ios_per_sec": 0, 00:06:21.883 "rw_mbytes_per_sec": 0, 00:06:21.883 "r_mbytes_per_sec": 0, 00:06:21.883 "w_mbytes_per_sec": 0 00:06:21.883 }, 00:06:21.883 "claimed": true, 00:06:21.883 "claim_type": "exclusive_write", 00:06:21.883 "zoned": false, 00:06:21.883 "supported_io_types": { 00:06:21.883 "read": true, 00:06:21.883 "write": true, 00:06:21.883 "unmap": true, 00:06:21.883 "flush": true, 00:06:21.883 "reset": true, 00:06:21.883 "nvme_admin": false, 00:06:21.883 "nvme_io": false, 00:06:21.883 "nvme_io_md": false, 00:06:21.883 "write_zeroes": true, 00:06:21.883 "zcopy": true, 00:06:21.883 "get_zone_info": false, 00:06:21.883 "zone_management": false, 00:06:21.883 "zone_append": false, 00:06:21.883 "compare": false, 00:06:21.883 "compare_and_write": false, 00:06:21.883 "abort": true, 00:06:21.883 "seek_hole": false, 00:06:21.883 "seek_data": false, 00:06:21.883 "copy": true, 00:06:21.883 "nvme_iov_md": false 00:06:21.883 }, 00:06:21.883 "memory_domains": [ 00:06:21.883 { 00:06:21.883 "dma_device_id": "system", 00:06:21.883 "dma_device_type": 1 00:06:21.883 }, 00:06:21.883 { 00:06:21.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.883 "dma_device_type": 2 00:06:21.883 } 00:06:21.883 ], 00:06:21.883 "driver_specific": {} 00:06:21.883 }, 00:06:21.883 { 00:06:21.883 "name": "Passthru0", 00:06:21.883 "aliases": [ 00:06:21.883 "f294f71c-eab1-5340-a1de-3020aa61af2a" 00:06:21.883 ], 00:06:21.883 "product_name": "passthru", 00:06:21.883 "block_size": 512, 00:06:21.883 "num_blocks": 16384, 00:06:21.883 "uuid": "f294f71c-eab1-5340-a1de-3020aa61af2a", 00:06:21.883 "assigned_rate_limits": { 00:06:21.883 "rw_ios_per_sec": 0, 00:06:21.883 "rw_mbytes_per_sec": 0, 00:06:21.883 "r_mbytes_per_sec": 0, 00:06:21.883 "w_mbytes_per_sec": 0 
00:06:21.883 }, 00:06:21.883 "claimed": false, 00:06:21.883 "zoned": false, 00:06:21.883 "supported_io_types": { 00:06:21.883 "read": true, 00:06:21.883 "write": true, 00:06:21.883 "unmap": true, 00:06:21.883 "flush": true, 00:06:21.883 "reset": true, 00:06:21.883 "nvme_admin": false, 00:06:21.883 "nvme_io": false, 00:06:21.883 "nvme_io_md": false, 00:06:21.883 "write_zeroes": true, 00:06:21.883 "zcopy": true, 00:06:21.883 "get_zone_info": false, 00:06:21.883 "zone_management": false, 00:06:21.883 "zone_append": false, 00:06:21.883 "compare": false, 00:06:21.883 "compare_and_write": false, 00:06:21.883 "abort": true, 00:06:21.883 "seek_hole": false, 00:06:21.883 "seek_data": false, 00:06:21.883 "copy": true, 00:06:21.883 "nvme_iov_md": false 00:06:21.883 }, 00:06:21.883 "memory_domains": [ 00:06:21.883 { 00:06:21.883 "dma_device_id": "system", 00:06:21.883 "dma_device_type": 1 00:06:21.883 }, 00:06:21.883 { 00:06:21.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.883 "dma_device_type": 2 00:06:21.883 } 00:06:21.883 ], 00:06:21.883 "driver_specific": { 00:06:21.883 "passthru": { 00:06:21.883 "name": "Passthru0", 00:06:21.883 "base_bdev_name": "Malloc2" 00:06:21.883 } 00:06:21.883 } 00:06:21.883 } 00:06:21.883 ]' 00:06:21.883 08:39:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:21.883 08:39:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:21.884 08:39:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:21.884 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.884 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.884 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.884 08:39:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:21.884 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:21.884 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.884 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.884 08:39:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:21.884 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.884 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.884 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.884 08:39:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:21.884 08:39:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:21.884 08:39:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:21.884 00:06:21.884 real 0m0.351s 00:06:21.884 user 0m0.214s 00:06:21.884 sys 0m0.039s 00:06:21.884 ************************************ 00:06:21.884 END TEST rpc_daemon_integrity 00:06:21.884 ************************************ 00:06:21.884 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.884 08:39:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.884 08:39:52 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:21.884 08:39:52 rpc -- rpc/rpc.sh@84 -- # killprocess 56814 00:06:21.884 08:39:52 rpc -- common/autotest_common.sh@954 -- # '[' -z 56814 ']' 00:06:21.884 08:39:52 rpc -- common/autotest_common.sh@958 -- # kill -0 56814 00:06:21.884 08:39:52 rpc -- common/autotest_common.sh@959 -- # uname 00:06:21.884 08:39:52 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.884 08:39:52 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56814 00:06:22.143 08:39:52 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.143 08:39:52 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.143 
killing process with pid 56814 00:06:22.143 08:39:52 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56814' 00:06:22.143 08:39:52 rpc -- common/autotest_common.sh@973 -- # kill 56814 00:06:22.143 08:39:52 rpc -- common/autotest_common.sh@978 -- # wait 56814 00:06:24.677 00:06:24.677 real 0m5.309s 00:06:24.677 user 0m6.027s 00:06:24.677 sys 0m0.927s 00:06:24.677 08:39:55 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.677 ************************************ 00:06:24.677 END TEST rpc 00:06:24.677 ************************************ 00:06:24.677 08:39:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.677 08:39:55 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:24.677 08:39:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.677 08:39:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.677 08:39:55 -- common/autotest_common.sh@10 -- # set +x 00:06:24.677 ************************************ 00:06:24.677 START TEST skip_rpc 00:06:24.677 ************************************ 00:06:24.677 08:39:55 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:24.677 * Looking for test storage... 
00:06:24.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:24.677 08:39:55 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.677 08:39:55 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.677 08:39:55 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.677 08:39:55 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.677 08:39:55 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:24.677 08:39:55 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.677 08:39:55 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.677 --rc genhtml_branch_coverage=1 00:06:24.677 --rc genhtml_function_coverage=1 00:06:24.677 --rc genhtml_legend=1 00:06:24.677 --rc geninfo_all_blocks=1 00:06:24.677 --rc geninfo_unexecuted_blocks=1 00:06:24.677 00:06:24.677 ' 00:06:24.677 08:39:55 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.677 --rc genhtml_branch_coverage=1 00:06:24.677 --rc genhtml_function_coverage=1 00:06:24.677 --rc genhtml_legend=1 00:06:24.677 --rc geninfo_all_blocks=1 00:06:24.677 --rc geninfo_unexecuted_blocks=1 00:06:24.677 00:06:24.677 ' 00:06:24.677 08:39:55 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:24.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.677 --rc genhtml_branch_coverage=1 00:06:24.677 --rc genhtml_function_coverage=1 00:06:24.677 --rc genhtml_legend=1 00:06:24.677 --rc geninfo_all_blocks=1 00:06:24.677 --rc geninfo_unexecuted_blocks=1 00:06:24.677 00:06:24.677 ' 00:06:24.677 08:39:55 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.677 --rc genhtml_branch_coverage=1 00:06:24.677 --rc genhtml_function_coverage=1 00:06:24.677 --rc genhtml_legend=1 00:06:24.677 --rc geninfo_all_blocks=1 00:06:24.677 --rc geninfo_unexecuted_blocks=1 00:06:24.677 00:06:24.677 ' 00:06:24.677 08:39:55 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:24.677 08:39:55 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:24.677 08:39:55 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:24.677 08:39:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.677 08:39:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.677 08:39:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.677 ************************************ 00:06:24.677 START TEST skip_rpc 00:06:24.677 ************************************ 00:06:24.677 08:39:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:24.677 08:39:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57043 00:06:24.677 08:39:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.677 08:39:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:24.677 08:39:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:24.677 [2024-11-20 08:39:55.489477] Starting SPDK v25.01-pre 
git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:06:24.678 [2024-11-20 08:39:55.489680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57043 ] 00:06:24.976 [2024-11-20 08:39:55.688402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.976 [2024-11-20 08:39:55.852583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57043 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57043 ']' 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57043 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57043 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.241 killing process with pid 57043 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57043' 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57043 00:06:30.241 08:40:00 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57043 00:06:32.260 00:06:32.260 real 0m7.331s 00:06:32.260 user 0m6.754s 00:06:32.260 sys 0m0.472s 00:06:32.260 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.260 ************************************ 00:06:32.260 END TEST skip_rpc 00:06:32.260 ************************************ 00:06:32.260 08:40:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.260 08:40:02 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:32.260 08:40:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.260 08:40:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.260 08:40:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.260 
************************************ 00:06:32.260 START TEST skip_rpc_with_json 00:06:32.260 ************************************ 00:06:32.260 08:40:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:32.260 08:40:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:32.260 08:40:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57147 00:06:32.260 08:40:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:32.260 08:40:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57147 00:06:32.260 08:40:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.260 08:40:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57147 ']' 00:06:32.260 08:40:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.260 08:40:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.260 08:40:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.260 08:40:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.260 08:40:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:32.260 [2024-11-20 08:40:02.874677] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:06:32.260 [2024-11-20 08:40:02.874872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57147 ] 00:06:32.260 [2024-11-20 08:40:03.063683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.519 [2024-11-20 08:40:03.207810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.456 08:40:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.456 08:40:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:33.456 08:40:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:33.456 08:40:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.456 08:40:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:33.456 [2024-11-20 08:40:04.155781] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:33.456 request: 00:06:33.456 { 00:06:33.456 "trtype": "tcp", 00:06:33.456 "method": "nvmf_get_transports", 00:06:33.456 "req_id": 1 00:06:33.456 } 00:06:33.456 Got JSON-RPC error response 00:06:33.456 response: 00:06:33.456 { 00:06:33.456 "code": -19, 00:06:33.456 "message": "No such device" 00:06:33.456 } 00:06:33.456 08:40:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:33.456 08:40:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:33.456 08:40:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.456 08:40:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:33.456 [2024-11-20 08:40:04.167882] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:06:33.456 08:40:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.456 08:40:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:33.456 08:40:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.456 08:40:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:33.456 08:40:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.456 08:40:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:33.456 { 00:06:33.456 "subsystems": [ 00:06:33.456 { 00:06:33.456 "subsystem": "fsdev", 00:06:33.456 "config": [ 00:06:33.456 { 00:06:33.456 "method": "fsdev_set_opts", 00:06:33.456 "params": { 00:06:33.456 "fsdev_io_pool_size": 65535, 00:06:33.456 "fsdev_io_cache_size": 256 00:06:33.456 } 00:06:33.456 } 00:06:33.456 ] 00:06:33.456 }, 00:06:33.456 { 00:06:33.456 "subsystem": "keyring", 00:06:33.456 "config": [] 00:06:33.456 }, 00:06:33.456 { 00:06:33.456 "subsystem": "iobuf", 00:06:33.456 "config": [ 00:06:33.456 { 00:06:33.456 "method": "iobuf_set_options", 00:06:33.456 "params": { 00:06:33.456 "small_pool_count": 8192, 00:06:33.456 "large_pool_count": 1024, 00:06:33.456 "small_bufsize": 8192, 00:06:33.456 "large_bufsize": 135168, 00:06:33.456 "enable_numa": false 00:06:33.456 } 00:06:33.456 } 00:06:33.456 ] 00:06:33.456 }, 00:06:33.456 { 00:06:33.456 "subsystem": "sock", 00:06:33.456 "config": [ 00:06:33.456 { 00:06:33.456 "method": "sock_set_default_impl", 00:06:33.456 "params": { 00:06:33.456 "impl_name": "posix" 00:06:33.456 } 00:06:33.456 }, 00:06:33.456 { 00:06:33.456 "method": "sock_impl_set_options", 00:06:33.456 "params": { 00:06:33.456 "impl_name": "ssl", 00:06:33.456 "recv_buf_size": 4096, 00:06:33.456 "send_buf_size": 4096, 00:06:33.456 "enable_recv_pipe": true, 00:06:33.456 "enable_quickack": false, 00:06:33.456 
"enable_placement_id": 0, 00:06:33.456 "enable_zerocopy_send_server": true, 00:06:33.456 "enable_zerocopy_send_client": false, 00:06:33.456 "zerocopy_threshold": 0, 00:06:33.456 "tls_version": 0, 00:06:33.456 "enable_ktls": false 00:06:33.456 } 00:06:33.456 }, 00:06:33.456 { 00:06:33.456 "method": "sock_impl_set_options", 00:06:33.456 "params": { 00:06:33.456 "impl_name": "posix", 00:06:33.456 "recv_buf_size": 2097152, 00:06:33.456 "send_buf_size": 2097152, 00:06:33.456 "enable_recv_pipe": true, 00:06:33.456 "enable_quickack": false, 00:06:33.456 "enable_placement_id": 0, 00:06:33.456 "enable_zerocopy_send_server": true, 00:06:33.456 "enable_zerocopy_send_client": false, 00:06:33.456 "zerocopy_threshold": 0, 00:06:33.456 "tls_version": 0, 00:06:33.456 "enable_ktls": false 00:06:33.456 } 00:06:33.456 } 00:06:33.456 ] 00:06:33.456 }, 00:06:33.456 { 00:06:33.456 "subsystem": "vmd", 00:06:33.456 "config": [] 00:06:33.456 }, 00:06:33.456 { 00:06:33.456 "subsystem": "accel", 00:06:33.456 "config": [ 00:06:33.456 { 00:06:33.456 "method": "accel_set_options", 00:06:33.456 "params": { 00:06:33.456 "small_cache_size": 128, 00:06:33.456 "large_cache_size": 16, 00:06:33.456 "task_count": 2048, 00:06:33.456 "sequence_count": 2048, 00:06:33.456 "buf_count": 2048 00:06:33.456 } 00:06:33.456 } 00:06:33.456 ] 00:06:33.456 }, 00:06:33.456 { 00:06:33.456 "subsystem": "bdev", 00:06:33.456 "config": [ 00:06:33.456 { 00:06:33.456 "method": "bdev_set_options", 00:06:33.456 "params": { 00:06:33.456 "bdev_io_pool_size": 65535, 00:06:33.456 "bdev_io_cache_size": 256, 00:06:33.456 "bdev_auto_examine": true, 00:06:33.456 "iobuf_small_cache_size": 128, 00:06:33.456 "iobuf_large_cache_size": 16 00:06:33.456 } 00:06:33.456 }, 00:06:33.456 { 00:06:33.456 "method": "bdev_raid_set_options", 00:06:33.456 "params": { 00:06:33.456 "process_window_size_kb": 1024, 00:06:33.456 "process_max_bandwidth_mb_sec": 0 00:06:33.456 } 00:06:33.456 }, 00:06:33.456 { 00:06:33.456 "method": "bdev_iscsi_set_options", 
00:06:33.456 "params": { 00:06:33.456 "timeout_sec": 30 00:06:33.456 } 00:06:33.456 }, 00:06:33.456 { 00:06:33.456 "method": "bdev_nvme_set_options", 00:06:33.456 "params": { 00:06:33.456 "action_on_timeout": "none", 00:06:33.456 "timeout_us": 0, 00:06:33.456 "timeout_admin_us": 0, 00:06:33.456 "keep_alive_timeout_ms": 10000, 00:06:33.456 "arbitration_burst": 0, 00:06:33.456 "low_priority_weight": 0, 00:06:33.456 "medium_priority_weight": 0, 00:06:33.456 "high_priority_weight": 0, 00:06:33.456 "nvme_adminq_poll_period_us": 10000, 00:06:33.456 "nvme_ioq_poll_period_us": 0, 00:06:33.456 "io_queue_requests": 0, 00:06:33.456 "delay_cmd_submit": true, 00:06:33.456 "transport_retry_count": 4, 00:06:33.456 "bdev_retry_count": 3, 00:06:33.456 "transport_ack_timeout": 0, 00:06:33.456 "ctrlr_loss_timeout_sec": 0, 00:06:33.456 "reconnect_delay_sec": 0, 00:06:33.456 "fast_io_fail_timeout_sec": 0, 00:06:33.456 "disable_auto_failback": false, 00:06:33.456 "generate_uuids": false, 00:06:33.456 "transport_tos": 0, 00:06:33.456 "nvme_error_stat": false, 00:06:33.456 "rdma_srq_size": 0, 00:06:33.456 "io_path_stat": false, 00:06:33.456 "allow_accel_sequence": false, 00:06:33.456 "rdma_max_cq_size": 0, 00:06:33.456 "rdma_cm_event_timeout_ms": 0, 00:06:33.456 "dhchap_digests": [ 00:06:33.456 "sha256", 00:06:33.456 "sha384", 00:06:33.456 "sha512" 00:06:33.456 ], 00:06:33.456 "dhchap_dhgroups": [ 00:06:33.456 "null", 00:06:33.456 "ffdhe2048", 00:06:33.456 "ffdhe3072", 00:06:33.456 "ffdhe4096", 00:06:33.456 "ffdhe6144", 00:06:33.456 "ffdhe8192" 00:06:33.456 ] 00:06:33.456 } 00:06:33.456 }, 00:06:33.456 { 00:06:33.456 "method": "bdev_nvme_set_hotplug", 00:06:33.456 "params": { 00:06:33.456 "period_us": 100000, 00:06:33.456 "enable": false 00:06:33.456 } 00:06:33.456 }, 00:06:33.456 { 00:06:33.456 "method": "bdev_wait_for_examine" 00:06:33.456 } 00:06:33.456 ] 00:06:33.456 }, 00:06:33.456 { 00:06:33.457 "subsystem": "scsi", 00:06:33.457 "config": null 00:06:33.457 }, 00:06:33.457 { 
00:06:33.457 "subsystem": "scheduler", 00:06:33.457 "config": [ 00:06:33.457 { 00:06:33.457 "method": "framework_set_scheduler", 00:06:33.457 "params": { 00:06:33.457 "name": "static" 00:06:33.457 } 00:06:33.457 } 00:06:33.457 ] 00:06:33.457 }, 00:06:33.457 { 00:06:33.457 "subsystem": "vhost_scsi", 00:06:33.457 "config": [] 00:06:33.457 }, 00:06:33.457 { 00:06:33.457 "subsystem": "vhost_blk", 00:06:33.457 "config": [] 00:06:33.457 }, 00:06:33.457 { 00:06:33.457 "subsystem": "ublk", 00:06:33.457 "config": [] 00:06:33.457 }, 00:06:33.457 { 00:06:33.457 "subsystem": "nbd", 00:06:33.457 "config": [] 00:06:33.457 }, 00:06:33.457 { 00:06:33.457 "subsystem": "nvmf", 00:06:33.457 "config": [ 00:06:33.457 { 00:06:33.457 "method": "nvmf_set_config", 00:06:33.457 "params": { 00:06:33.457 "discovery_filter": "match_any", 00:06:33.457 "admin_cmd_passthru": { 00:06:33.457 "identify_ctrlr": false 00:06:33.457 }, 00:06:33.457 "dhchap_digests": [ 00:06:33.457 "sha256", 00:06:33.457 "sha384", 00:06:33.457 "sha512" 00:06:33.457 ], 00:06:33.457 "dhchap_dhgroups": [ 00:06:33.457 "null", 00:06:33.457 "ffdhe2048", 00:06:33.457 "ffdhe3072", 00:06:33.457 "ffdhe4096", 00:06:33.457 "ffdhe6144", 00:06:33.457 "ffdhe8192" 00:06:33.457 ] 00:06:33.457 } 00:06:33.457 }, 00:06:33.457 { 00:06:33.457 "method": "nvmf_set_max_subsystems", 00:06:33.457 "params": { 00:06:33.457 "max_subsystems": 1024 00:06:33.457 } 00:06:33.457 }, 00:06:33.457 { 00:06:33.457 "method": "nvmf_set_crdt", 00:06:33.457 "params": { 00:06:33.457 "crdt1": 0, 00:06:33.457 "crdt2": 0, 00:06:33.457 "crdt3": 0 00:06:33.457 } 00:06:33.457 }, 00:06:33.457 { 00:06:33.457 "method": "nvmf_create_transport", 00:06:33.457 "params": { 00:06:33.457 "trtype": "TCP", 00:06:33.457 "max_queue_depth": 128, 00:06:33.457 "max_io_qpairs_per_ctrlr": 127, 00:06:33.457 "in_capsule_data_size": 4096, 00:06:33.457 "max_io_size": 131072, 00:06:33.457 "io_unit_size": 131072, 00:06:33.457 "max_aq_depth": 128, 00:06:33.457 "num_shared_buffers": 511, 
00:06:33.457 "buf_cache_size": 4294967295, 00:06:33.457 "dif_insert_or_strip": false, 00:06:33.457 "zcopy": false, 00:06:33.457 "c2h_success": true, 00:06:33.457 "sock_priority": 0, 00:06:33.457 "abort_timeout_sec": 1, 00:06:33.457 "ack_timeout": 0, 00:06:33.457 "data_wr_pool_size": 0 00:06:33.457 } 00:06:33.457 } 00:06:33.457 ] 00:06:33.457 }, 00:06:33.457 { 00:06:33.457 "subsystem": "iscsi", 00:06:33.457 "config": [ 00:06:33.457 { 00:06:33.457 "method": "iscsi_set_options", 00:06:33.457 "params": { 00:06:33.457 "node_base": "iqn.2016-06.io.spdk", 00:06:33.457 "max_sessions": 128, 00:06:33.457 "max_connections_per_session": 2, 00:06:33.457 "max_queue_depth": 64, 00:06:33.457 "default_time2wait": 2, 00:06:33.457 "default_time2retain": 20, 00:06:33.457 "first_burst_length": 8192, 00:06:33.457 "immediate_data": true, 00:06:33.457 "allow_duplicated_isid": false, 00:06:33.457 "error_recovery_level": 0, 00:06:33.457 "nop_timeout": 60, 00:06:33.457 "nop_in_interval": 30, 00:06:33.457 "disable_chap": false, 00:06:33.457 "require_chap": false, 00:06:33.457 "mutual_chap": false, 00:06:33.457 "chap_group": 0, 00:06:33.457 "max_large_datain_per_connection": 64, 00:06:33.457 "max_r2t_per_connection": 4, 00:06:33.457 "pdu_pool_size": 36864, 00:06:33.457 "immediate_data_pool_size": 16384, 00:06:33.457 "data_out_pool_size": 2048 00:06:33.457 } 00:06:33.457 } 00:06:33.457 ] 00:06:33.457 } 00:06:33.457 ] 00:06:33.457 } 00:06:33.457 08:40:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:33.457 08:40:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57147 00:06:33.457 08:40:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57147 ']' 00:06:33.457 08:40:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57147 00:06:33.457 08:40:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:33.457 08:40:04 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.457 08:40:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57147 00:06:33.717 killing process with pid 57147 00:06:33.717 08:40:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.717 08:40:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.717 08:40:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57147' 00:06:33.717 08:40:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57147 00:06:33.717 08:40:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57147 00:06:36.264 08:40:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57203 00:06:36.264 08:40:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:36.264 08:40:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:41.533 08:40:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57203 00:06:41.533 08:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57203 ']' 00:06:41.533 08:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57203 00:06:41.533 08:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:41.533 08:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.533 08:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57203 00:06:41.533 killing process with pid 57203 00:06:41.533 08:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.533 08:40:11 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.533 08:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57203' 00:06:41.533 08:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57203 00:06:41.533 08:40:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57203 00:06:43.456 08:40:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:43.456 08:40:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:43.456 ************************************ 00:06:43.456 END TEST skip_rpc_with_json 00:06:43.456 ************************************ 00:06:43.456 00:06:43.456 real 0m11.324s 00:06:43.456 user 0m10.760s 00:06:43.456 sys 0m1.078s 00:06:43.457 08:40:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.457 08:40:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:43.457 08:40:14 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:43.457 08:40:14 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.457 08:40:14 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.457 08:40:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.457 ************************************ 00:06:43.457 START TEST skip_rpc_with_delay 00:06:43.457 ************************************ 00:06:43.457 08:40:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:43.457 08:40:14 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:43.457 08:40:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:43.457 
08:40:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:43.457 08:40:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:43.457 08:40:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.457 08:40:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:43.457 08:40:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.457 08:40:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:43.457 08:40:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.457 08:40:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:43.457 08:40:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:43.457 08:40:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:43.457 [2024-11-20 08:40:14.249125] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:43.457 ************************************ 00:06:43.457 END TEST skip_rpc_with_delay 00:06:43.457 ************************************ 00:06:43.457 08:40:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:43.457 08:40:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:43.457 08:40:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:43.457 08:40:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:43.457 00:06:43.457 real 0m0.204s 00:06:43.457 user 0m0.106s 00:06:43.457 sys 0m0.096s 00:06:43.457 08:40:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.457 08:40:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:43.457 08:40:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:43.457 08:40:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:43.457 08:40:14 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:43.457 08:40:14 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.457 08:40:14 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.457 08:40:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.715 ************************************ 00:06:43.715 START TEST exit_on_failed_rpc_init 00:06:43.715 ************************************ 00:06:43.715 08:40:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:43.715 08:40:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57336 00:06:43.715 08:40:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57336 00:06:43.715 08:40:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57336 ']' 00:06:43.715 08:40:14 skip_rpc.exit_on_failed_rpc_init -- 
rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:43.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.715 08:40:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.715 08:40:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.715 08:40:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.715 08:40:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.715 08:40:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:43.715 [2024-11-20 08:40:14.496770] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:06:43.715 [2024-11-20 08:40:14.497185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57336 ] 00:06:43.975 [2024-11-20 08:40:14.673585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.975 [2024-11-20 08:40:14.812767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.911 08:40:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.911 08:40:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:44.911 08:40:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:44.911 08:40:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:44.911 08:40:15 skip_rpc.exit_on_failed_rpc_init 
-- common/autotest_common.sh@652 -- # local es=0 00:06:44.911 08:40:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:44.911 08:40:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:44.911 08:40:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.911 08:40:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:44.911 08:40:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.911 08:40:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:44.911 08:40:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.911 08:40:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:44.911 08:40:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:44.911 08:40:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:45.170 [2024-11-20 08:40:15.937868] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:06:45.170 [2024-11-20 08:40:15.938059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57360 ] 00:06:45.428 [2024-11-20 08:40:16.131135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.428 [2024-11-20 08:40:16.324117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.428 [2024-11-20 08:40:16.324293] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:45.428 [2024-11-20 08:40:16.324322] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:45.428 [2024-11-20 08:40:16.324349] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.996 08:40:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:45.996 08:40:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:45.996 08:40:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:45.996 08:40:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:45.996 08:40:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:45.996 08:40:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:45.996 08:40:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:45.996 08:40:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57336 00:06:45.996 08:40:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57336 ']' 00:06:45.996 08:40:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57336 00:06:45.996 08:40:16 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:45.996 08:40:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.996 08:40:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57336 00:06:45.996 killing process with pid 57336 00:06:45.996 08:40:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.996 08:40:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.996 08:40:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57336' 00:06:45.996 08:40:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57336 00:06:45.996 08:40:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57336 00:06:48.527 00:06:48.527 real 0m4.599s 00:06:48.527 user 0m5.137s 00:06:48.527 sys 0m0.756s 00:06:48.527 08:40:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.527 ************************************ 00:06:48.527 END TEST exit_on_failed_rpc_init 00:06:48.527 ************************************ 00:06:48.527 08:40:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:48.527 08:40:19 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:48.527 00:06:48.527 real 0m23.865s 00:06:48.527 user 0m22.949s 00:06:48.527 sys 0m2.617s 00:06:48.527 08:40:19 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.527 ************************************ 00:06:48.527 08:40:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.527 END TEST skip_rpc 00:06:48.527 ************************************ 00:06:48.527 08:40:19 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:48.527 08:40:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.527 08:40:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.527 08:40:19 -- common/autotest_common.sh@10 -- # set +x 00:06:48.527 ************************************ 00:06:48.527 START TEST rpc_client 00:06:48.527 ************************************ 00:06:48.527 08:40:19 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:48.527 * Looking for test storage... 00:06:48.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:48.527 08:40:19 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:48.527 08:40:19 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:48.527 08:40:19 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:48.527 08:40:19 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@345 
-- # : 1 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.528 08:40:19 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:48.528 08:40:19 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.528 08:40:19 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:48.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.528 --rc genhtml_branch_coverage=1 00:06:48.528 --rc genhtml_function_coverage=1 00:06:48.528 --rc genhtml_legend=1 00:06:48.528 --rc geninfo_all_blocks=1 00:06:48.528 --rc geninfo_unexecuted_blocks=1 00:06:48.528 00:06:48.528 ' 00:06:48.528 08:40:19 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:48.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.528 --rc genhtml_branch_coverage=1 00:06:48.528 --rc genhtml_function_coverage=1 00:06:48.528 --rc 
genhtml_legend=1 00:06:48.528 --rc geninfo_all_blocks=1 00:06:48.528 --rc geninfo_unexecuted_blocks=1 00:06:48.528 00:06:48.528 ' 00:06:48.528 08:40:19 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:48.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.528 --rc genhtml_branch_coverage=1 00:06:48.528 --rc genhtml_function_coverage=1 00:06:48.528 --rc genhtml_legend=1 00:06:48.528 --rc geninfo_all_blocks=1 00:06:48.528 --rc geninfo_unexecuted_blocks=1 00:06:48.528 00:06:48.528 ' 00:06:48.528 08:40:19 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:48.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.528 --rc genhtml_branch_coverage=1 00:06:48.528 --rc genhtml_function_coverage=1 00:06:48.528 --rc genhtml_legend=1 00:06:48.528 --rc geninfo_all_blocks=1 00:06:48.528 --rc geninfo_unexecuted_blocks=1 00:06:48.528 00:06:48.528 ' 00:06:48.528 08:40:19 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:48.528 OK 00:06:48.528 08:40:19 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:48.528 00:06:48.528 real 0m0.261s 00:06:48.528 user 0m0.150s 00:06:48.528 sys 0m0.119s 00:06:48.528 08:40:19 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.528 08:40:19 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:48.528 ************************************ 00:06:48.528 END TEST rpc_client 00:06:48.528 ************************************ 00:06:48.528 08:40:19 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:48.528 08:40:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.528 08:40:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.528 08:40:19 -- common/autotest_common.sh@10 -- # set +x 00:06:48.528 ************************************ 00:06:48.528 START TEST json_config 
00:06:48.528 ************************************ 00:06:48.528 08:40:19 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:48.528 08:40:19 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:48.528 08:40:19 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:48.528 08:40:19 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:48.787 08:40:19 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:48.787 08:40:19 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.787 08:40:19 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.787 08:40:19 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.787 08:40:19 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.787 08:40:19 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.787 08:40:19 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.787 08:40:19 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.787 08:40:19 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.787 08:40:19 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.787 08:40:19 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.787 08:40:19 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.787 08:40:19 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:48.787 08:40:19 json_config -- scripts/common.sh@345 -- # : 1 00:06:48.787 08:40:19 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.787 08:40:19 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:48.787 08:40:19 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:48.787 08:40:19 json_config -- scripts/common.sh@353 -- # local d=1 00:06:48.787 08:40:19 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.787 08:40:19 json_config -- scripts/common.sh@355 -- # echo 1 00:06:48.787 08:40:19 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.787 08:40:19 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:48.787 08:40:19 json_config -- scripts/common.sh@353 -- # local d=2 00:06:48.787 08:40:19 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.787 08:40:19 json_config -- scripts/common.sh@355 -- # echo 2 00:06:48.787 08:40:19 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.787 08:40:19 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.787 08:40:19 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.787 08:40:19 json_config -- scripts/common.sh@368 -- # return 0 00:06:48.787 08:40:19 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.787 08:40:19 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:48.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.787 --rc genhtml_branch_coverage=1 00:06:48.787 --rc genhtml_function_coverage=1 00:06:48.787 --rc genhtml_legend=1 00:06:48.787 --rc geninfo_all_blocks=1 00:06:48.787 --rc geninfo_unexecuted_blocks=1 00:06:48.787 00:06:48.787 ' 00:06:48.787 08:40:19 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:48.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.787 --rc genhtml_branch_coverage=1 00:06:48.787 --rc genhtml_function_coverage=1 00:06:48.787 --rc genhtml_legend=1 00:06:48.787 --rc geninfo_all_blocks=1 00:06:48.787 --rc geninfo_unexecuted_blocks=1 00:06:48.787 00:06:48.787 ' 00:06:48.787 08:40:19 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:48.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.787 --rc genhtml_branch_coverage=1 00:06:48.787 --rc genhtml_function_coverage=1 00:06:48.787 --rc genhtml_legend=1 00:06:48.787 --rc geninfo_all_blocks=1 00:06:48.787 --rc geninfo_unexecuted_blocks=1 00:06:48.787 00:06:48.787 ' 00:06:48.787 08:40:19 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:48.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.787 --rc genhtml_branch_coverage=1 00:06:48.787 --rc genhtml_function_coverage=1 00:06:48.787 --rc genhtml_legend=1 00:06:48.787 --rc geninfo_all_blocks=1 00:06:48.787 --rc geninfo_unexecuted_blocks=1 00:06:48.787 00:06:48.787 ' 00:06:48.787 08:40:19 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:36bf68f8-f61b-455b-8e26-5b6a0b1cc387 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=36bf68f8-f61b-455b-8e26-5b6a0b1cc387 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:48.787 08:40:19 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:48.787 08:40:19 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.787 08:40:19 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.787 08:40:19 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.787 08:40:19 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.787 08:40:19 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.787 08:40:19 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.787 08:40:19 json_config -- paths/export.sh@5 -- # export PATH 00:06:48.787 08:40:19 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@51 -- # : 0 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:48.787 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:48.787 08:40:19 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:48.788 08:40:19 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:06:48.788 08:40:19 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:48.788 08:40:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:48.788 08:40:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:48.788 08:40:19 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:48.788 08:40:19 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:48.788 WARNING: No tests are enabled so not running JSON configuration tests 00:06:48.788 08:40:19 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:48.788 00:06:48.788 real 0m0.182s 00:06:48.788 user 0m0.116s 00:06:48.788 sys 0m0.067s 00:06:48.788 08:40:19 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.788 ************************************ 00:06:48.788 END TEST json_config 00:06:48.788 ************************************ 00:06:48.788 08:40:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:48.788 08:40:19 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:48.788 08:40:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.788 08:40:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.788 08:40:19 -- common/autotest_common.sh@10 -- # set +x 00:06:48.788 ************************************ 00:06:48.788 START TEST json_config_extra_key 00:06:48.788 ************************************ 00:06:48.788 08:40:19 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:48.788 08:40:19 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:48.788 08:40:19 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:06:48.788 08:40:19 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:49.048 08:40:19 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.048 08:40:19 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:49.048 08:40:19 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.048 08:40:19 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:49.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.048 --rc genhtml_branch_coverage=1 00:06:49.048 --rc genhtml_function_coverage=1 00:06:49.048 --rc genhtml_legend=1 00:06:49.048 --rc geninfo_all_blocks=1 00:06:49.048 --rc geninfo_unexecuted_blocks=1 00:06:49.048 00:06:49.048 ' 00:06:49.048 08:40:19 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:49.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.048 --rc genhtml_branch_coverage=1 00:06:49.048 --rc genhtml_function_coverage=1 00:06:49.048 --rc 
genhtml_legend=1 00:06:49.048 --rc geninfo_all_blocks=1 00:06:49.048 --rc geninfo_unexecuted_blocks=1 00:06:49.048 00:06:49.048 ' 00:06:49.048 08:40:19 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:49.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.048 --rc genhtml_branch_coverage=1 00:06:49.048 --rc genhtml_function_coverage=1 00:06:49.048 --rc genhtml_legend=1 00:06:49.048 --rc geninfo_all_blocks=1 00:06:49.048 --rc geninfo_unexecuted_blocks=1 00:06:49.048 00:06:49.049 ' 00:06:49.049 08:40:19 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:49.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.049 --rc genhtml_branch_coverage=1 00:06:49.049 --rc genhtml_function_coverage=1 00:06:49.049 --rc genhtml_legend=1 00:06:49.049 --rc geninfo_all_blocks=1 00:06:49.049 --rc geninfo_unexecuted_blocks=1 00:06:49.049 00:06:49.049 ' 00:06:49.049 08:40:19 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:36bf68f8-f61b-455b-8e26-5b6a0b1cc387 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=36bf68f8-f61b-455b-8e26-5b6a0b1cc387 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:49.049 08:40:19 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:49.049 08:40:19 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.049 08:40:19 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.049 08:40:19 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.049 08:40:19 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.049 08:40:19 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.049 08:40:19 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.049 08:40:19 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:49.049 08:40:19 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:49.049 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:49.049 08:40:19 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:49.049 08:40:19 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:49.049 08:40:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:49.049 08:40:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:49.049 08:40:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:49.049 08:40:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:49.049 08:40:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:49.049 08:40:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:49.049 08:40:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:49.049 08:40:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:49.049 08:40:19 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:49.049 08:40:19 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:49.049 INFO: launching applications... 
00:06:49.049 08:40:19 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:49.049 Waiting for target to run... 00:06:49.049 08:40:19 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:49.049 08:40:19 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:49.049 08:40:19 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:49.049 08:40:19 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:49.049 08:40:19 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:49.049 08:40:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:49.049 08:40:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:49.049 08:40:19 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57570 00:06:49.049 08:40:19 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:49.049 08:40:19 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57570 /var/tmp/spdk_tgt.sock 00:06:49.049 08:40:19 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:49.049 08:40:19 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57570 ']' 00:06:49.049 08:40:19 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:49.049 08:40:19 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.049 08:40:19 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:06:49.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:49.049 08:40:19 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.049 08:40:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:49.049 [2024-11-20 08:40:19.925129] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:06:49.049 [2024-11-20 08:40:19.925708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57570 ] 00:06:49.617 [2024-11-20 08:40:20.407036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.876 [2024-11-20 08:40:20.558129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.443 08:40:21 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.443 08:40:21 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:50.443 00:06:50.443 INFO: shutting down applications... 00:06:50.443 08:40:21 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:50.443 08:40:21 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:50.443 08:40:21 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:50.443 08:40:21 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:50.443 08:40:21 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:50.443 08:40:21 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57570 ]] 00:06:50.444 08:40:21 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57570 00:06:50.444 08:40:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:50.444 08:40:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:50.444 08:40:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57570 00:06:50.444 08:40:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:51.009 08:40:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:51.009 08:40:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:51.009 08:40:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57570 00:06:51.009 08:40:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:51.575 08:40:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:51.575 08:40:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:51.575 08:40:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57570 00:06:51.575 08:40:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:52.165 08:40:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:52.165 08:40:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:52.165 08:40:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57570 00:06:52.165 08:40:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:52.424 08:40:23 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:06:52.424 08:40:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:52.424 08:40:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57570 00:06:52.424 08:40:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:52.991 08:40:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:52.991 08:40:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:52.991 08:40:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57570 00:06:52.991 08:40:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:53.557 08:40:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:53.557 08:40:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:53.557 08:40:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57570 00:06:53.557 08:40:24 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:53.557 SPDK target shutdown done 00:06:53.557 Success 00:06:53.557 08:40:24 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:53.557 08:40:24 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:53.557 08:40:24 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:53.557 08:40:24 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:53.557 00:06:53.557 real 0m4.699s 00:06:53.557 user 0m4.205s 00:06:53.557 sys 0m0.692s 00:06:53.557 ************************************ 00:06:53.557 END TEST json_config_extra_key 00:06:53.557 ************************************ 00:06:53.557 08:40:24 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.557 08:40:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:53.557 08:40:24 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:53.557 08:40:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.557 08:40:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.557 08:40:24 -- common/autotest_common.sh@10 -- # set +x 00:06:53.557 ************************************ 00:06:53.557 START TEST alias_rpc 00:06:53.557 ************************************ 00:06:53.557 08:40:24 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:53.557 * Looking for test storage... 00:06:53.557 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:53.557 08:40:24 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:53.557 08:40:24 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:53.557 08:40:24 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:53.817 08:40:24 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:53.817 08:40:24 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.817 08:40:24 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:53.817 08:40:24 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.817 08:40:24 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:53.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.817 --rc genhtml_branch_coverage=1 00:06:53.817 --rc genhtml_function_coverage=1 00:06:53.817 --rc genhtml_legend=1 00:06:53.817 --rc geninfo_all_blocks=1 00:06:53.817 --rc geninfo_unexecuted_blocks=1 00:06:53.817 00:06:53.817 ' 00:06:53.817 08:40:24 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:53.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.817 --rc genhtml_branch_coverage=1 00:06:53.817 --rc genhtml_function_coverage=1 00:06:53.817 --rc 
genhtml_legend=1 00:06:53.817 --rc geninfo_all_blocks=1 00:06:53.817 --rc geninfo_unexecuted_blocks=1 00:06:53.817 00:06:53.817 ' 00:06:53.817 08:40:24 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:53.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.817 --rc genhtml_branch_coverage=1 00:06:53.817 --rc genhtml_function_coverage=1 00:06:53.817 --rc genhtml_legend=1 00:06:53.817 --rc geninfo_all_blocks=1 00:06:53.817 --rc geninfo_unexecuted_blocks=1 00:06:53.817 00:06:53.817 ' 00:06:53.817 08:40:24 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:53.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.817 --rc genhtml_branch_coverage=1 00:06:53.817 --rc genhtml_function_coverage=1 00:06:53.817 --rc genhtml_legend=1 00:06:53.817 --rc geninfo_all_blocks=1 00:06:53.817 --rc geninfo_unexecuted_blocks=1 00:06:53.817 00:06:53.817 ' 00:06:53.817 08:40:24 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:53.817 08:40:24 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57676 00:06:53.817 08:40:24 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:53.817 08:40:24 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57676 00:06:53.817 08:40:24 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57676 ']' 00:06:53.817 08:40:24 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.817 08:40:24 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.817 08:40:24 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:53.817 08:40:24 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.817 08:40:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.817 [2024-11-20 08:40:24.680332] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:06:53.817 [2024-11-20 08:40:24.680865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57676 ] 00:06:54.077 [2024-11-20 08:40:24.870418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.336 [2024-11-20 08:40:25.038632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.271 08:40:25 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.271 08:40:25 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:55.271 08:40:25 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:55.528 08:40:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57676 00:06:55.528 08:40:26 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57676 ']' 00:06:55.528 08:40:26 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57676 00:06:55.528 08:40:26 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:55.528 08:40:26 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.528 08:40:26 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57676 00:06:55.528 killing process with pid 57676 00:06:55.528 08:40:26 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.528 08:40:26 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.528 08:40:26 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57676' 00:06:55.528 08:40:26 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57676 00:06:55.528 08:40:26 alias_rpc -- common/autotest_common.sh@978 -- # wait 57676 00:06:58.124 ************************************ 00:06:58.124 END TEST alias_rpc 00:06:58.124 ************************************ 00:06:58.124 00:06:58.124 real 0m4.261s 00:06:58.124 user 0m4.464s 00:06:58.124 sys 0m0.647s 00:06:58.124 08:40:28 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.124 08:40:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.124 08:40:28 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:58.124 08:40:28 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:58.124 08:40:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.124 08:40:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.124 08:40:28 -- common/autotest_common.sh@10 -- # set +x 00:06:58.124 ************************************ 00:06:58.124 START TEST spdkcli_tcp 00:06:58.124 ************************************ 00:06:58.124 08:40:28 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:58.124 * Looking for test storage... 
00:06:58.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:58.124 08:40:28 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:58.124 08:40:28 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:58.124 08:40:28 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:58.124 08:40:28 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.124 08:40:28 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:58.124 08:40:28 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.124 08:40:28 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:58.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.124 --rc genhtml_branch_coverage=1 00:06:58.124 --rc genhtml_function_coverage=1 00:06:58.124 --rc genhtml_legend=1 00:06:58.124 --rc geninfo_all_blocks=1 00:06:58.124 --rc geninfo_unexecuted_blocks=1 00:06:58.124 00:06:58.124 ' 00:06:58.124 08:40:28 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:58.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.124 --rc genhtml_branch_coverage=1 00:06:58.124 --rc genhtml_function_coverage=1 00:06:58.124 --rc genhtml_legend=1 00:06:58.124 --rc geninfo_all_blocks=1 00:06:58.124 --rc geninfo_unexecuted_blocks=1 00:06:58.124 00:06:58.124 ' 00:06:58.124 08:40:28 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:58.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.124 --rc genhtml_branch_coverage=1 00:06:58.124 --rc genhtml_function_coverage=1 00:06:58.124 --rc genhtml_legend=1 00:06:58.124 --rc geninfo_all_blocks=1 00:06:58.124 --rc geninfo_unexecuted_blocks=1 00:06:58.124 00:06:58.124 ' 00:06:58.124 08:40:28 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:58.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.124 --rc genhtml_branch_coverage=1 00:06:58.124 --rc genhtml_function_coverage=1 00:06:58.124 --rc genhtml_legend=1 00:06:58.124 --rc geninfo_all_blocks=1 00:06:58.124 --rc geninfo_unexecuted_blocks=1 00:06:58.124 00:06:58.124 ' 00:06:58.124 08:40:28 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:58.124 08:40:28 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:58.124 08:40:28 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:58.124 08:40:28 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:58.124 08:40:28 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:58.124 08:40:28 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:58.124 08:40:28 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:58.124 08:40:28 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:58.125 08:40:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.125 08:40:28 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57794 00:06:58.125 08:40:28 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:58.125 08:40:28 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57794 00:06:58.125 08:40:28 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57794 ']' 00:06:58.125 08:40:28 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.125 08:40:28 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.125 08:40:28 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.125 08:40:28 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.125 08:40:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.125 [2024-11-20 08:40:29.004657] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:06:58.125 [2024-11-20 08:40:29.005768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57794 ] 00:06:58.384 [2024-11-20 08:40:29.195403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:58.643 [2024-11-20 08:40:29.329461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.643 [2024-11-20 08:40:29.329477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.577 08:40:30 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.577 08:40:30 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:59.577 08:40:30 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57811 00:06:59.577 08:40:30 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:59.577 08:40:30 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:59.835 [ 00:06:59.835 "bdev_malloc_delete", 
00:06:59.835 "bdev_malloc_create", 00:06:59.835 "bdev_null_resize", 00:06:59.835 "bdev_null_delete", 00:06:59.835 "bdev_null_create", 00:06:59.835 "bdev_nvme_cuse_unregister", 00:06:59.835 "bdev_nvme_cuse_register", 00:06:59.835 "bdev_opal_new_user", 00:06:59.835 "bdev_opal_set_lock_state", 00:06:59.835 "bdev_opal_delete", 00:06:59.835 "bdev_opal_get_info", 00:06:59.835 "bdev_opal_create", 00:06:59.835 "bdev_nvme_opal_revert", 00:06:59.835 "bdev_nvme_opal_init", 00:06:59.835 "bdev_nvme_send_cmd", 00:06:59.835 "bdev_nvme_set_keys", 00:06:59.835 "bdev_nvme_get_path_iostat", 00:06:59.835 "bdev_nvme_get_mdns_discovery_info", 00:06:59.835 "bdev_nvme_stop_mdns_discovery", 00:06:59.835 "bdev_nvme_start_mdns_discovery", 00:06:59.835 "bdev_nvme_set_multipath_policy", 00:06:59.835 "bdev_nvme_set_preferred_path", 00:06:59.835 "bdev_nvme_get_io_paths", 00:06:59.835 "bdev_nvme_remove_error_injection", 00:06:59.835 "bdev_nvme_add_error_injection", 00:06:59.835 "bdev_nvme_get_discovery_info", 00:06:59.835 "bdev_nvme_stop_discovery", 00:06:59.835 "bdev_nvme_start_discovery", 00:06:59.835 "bdev_nvme_get_controller_health_info", 00:06:59.835 "bdev_nvme_disable_controller", 00:06:59.835 "bdev_nvme_enable_controller", 00:06:59.835 "bdev_nvme_reset_controller", 00:06:59.835 "bdev_nvme_get_transport_statistics", 00:06:59.835 "bdev_nvme_apply_firmware", 00:06:59.835 "bdev_nvme_detach_controller", 00:06:59.835 "bdev_nvme_get_controllers", 00:06:59.835 "bdev_nvme_attach_controller", 00:06:59.835 "bdev_nvme_set_hotplug", 00:06:59.835 "bdev_nvme_set_options", 00:06:59.835 "bdev_passthru_delete", 00:06:59.835 "bdev_passthru_create", 00:06:59.835 "bdev_lvol_set_parent_bdev", 00:06:59.835 "bdev_lvol_set_parent", 00:06:59.835 "bdev_lvol_check_shallow_copy", 00:06:59.835 "bdev_lvol_start_shallow_copy", 00:06:59.835 "bdev_lvol_grow_lvstore", 00:06:59.835 "bdev_lvol_get_lvols", 00:06:59.835 "bdev_lvol_get_lvstores", 00:06:59.835 "bdev_lvol_delete", 00:06:59.835 "bdev_lvol_set_read_only", 
00:06:59.835 "bdev_lvol_resize", 00:06:59.835 "bdev_lvol_decouple_parent", 00:06:59.835 "bdev_lvol_inflate", 00:06:59.835 "bdev_lvol_rename", 00:06:59.835 "bdev_lvol_clone_bdev", 00:06:59.835 "bdev_lvol_clone", 00:06:59.835 "bdev_lvol_snapshot", 00:06:59.835 "bdev_lvol_create", 00:06:59.835 "bdev_lvol_delete_lvstore", 00:06:59.835 "bdev_lvol_rename_lvstore", 00:06:59.835 "bdev_lvol_create_lvstore", 00:06:59.835 "bdev_raid_set_options", 00:06:59.835 "bdev_raid_remove_base_bdev", 00:06:59.835 "bdev_raid_add_base_bdev", 00:06:59.835 "bdev_raid_delete", 00:06:59.835 "bdev_raid_create", 00:06:59.835 "bdev_raid_get_bdevs", 00:06:59.835 "bdev_error_inject_error", 00:06:59.835 "bdev_error_delete", 00:06:59.835 "bdev_error_create", 00:06:59.835 "bdev_split_delete", 00:06:59.835 "bdev_split_create", 00:06:59.835 "bdev_delay_delete", 00:06:59.835 "bdev_delay_create", 00:06:59.835 "bdev_delay_update_latency", 00:06:59.835 "bdev_zone_block_delete", 00:06:59.835 "bdev_zone_block_create", 00:06:59.835 "blobfs_create", 00:06:59.835 "blobfs_detect", 00:06:59.835 "blobfs_set_cache_size", 00:06:59.835 "bdev_aio_delete", 00:06:59.835 "bdev_aio_rescan", 00:06:59.835 "bdev_aio_create", 00:06:59.835 "bdev_ftl_set_property", 00:06:59.835 "bdev_ftl_get_properties", 00:06:59.835 "bdev_ftl_get_stats", 00:06:59.835 "bdev_ftl_unmap", 00:06:59.835 "bdev_ftl_unload", 00:06:59.835 "bdev_ftl_delete", 00:06:59.835 "bdev_ftl_load", 00:06:59.835 "bdev_ftl_create", 00:06:59.835 "bdev_virtio_attach_controller", 00:06:59.835 "bdev_virtio_scsi_get_devices", 00:06:59.835 "bdev_virtio_detach_controller", 00:06:59.835 "bdev_virtio_blk_set_hotplug", 00:06:59.835 "bdev_iscsi_delete", 00:06:59.835 "bdev_iscsi_create", 00:06:59.836 "bdev_iscsi_set_options", 00:06:59.836 "accel_error_inject_error", 00:06:59.836 "ioat_scan_accel_module", 00:06:59.836 "dsa_scan_accel_module", 00:06:59.836 "iaa_scan_accel_module", 00:06:59.836 "keyring_file_remove_key", 00:06:59.836 "keyring_file_add_key", 00:06:59.836 
"keyring_linux_set_options", 00:06:59.836 "fsdev_aio_delete", 00:06:59.836 "fsdev_aio_create", 00:06:59.836 "iscsi_get_histogram", 00:06:59.836 "iscsi_enable_histogram", 00:06:59.836 "iscsi_set_options", 00:06:59.836 "iscsi_get_auth_groups", 00:06:59.836 "iscsi_auth_group_remove_secret", 00:06:59.836 "iscsi_auth_group_add_secret", 00:06:59.836 "iscsi_delete_auth_group", 00:06:59.836 "iscsi_create_auth_group", 00:06:59.836 "iscsi_set_discovery_auth", 00:06:59.836 "iscsi_get_options", 00:06:59.836 "iscsi_target_node_request_logout", 00:06:59.836 "iscsi_target_node_set_redirect", 00:06:59.836 "iscsi_target_node_set_auth", 00:06:59.836 "iscsi_target_node_add_lun", 00:06:59.836 "iscsi_get_stats", 00:06:59.836 "iscsi_get_connections", 00:06:59.836 "iscsi_portal_group_set_auth", 00:06:59.836 "iscsi_start_portal_group", 00:06:59.836 "iscsi_delete_portal_group", 00:06:59.836 "iscsi_create_portal_group", 00:06:59.836 "iscsi_get_portal_groups", 00:06:59.836 "iscsi_delete_target_node", 00:06:59.836 "iscsi_target_node_remove_pg_ig_maps", 00:06:59.836 "iscsi_target_node_add_pg_ig_maps", 00:06:59.836 "iscsi_create_target_node", 00:06:59.836 "iscsi_get_target_nodes", 00:06:59.836 "iscsi_delete_initiator_group", 00:06:59.836 "iscsi_initiator_group_remove_initiators", 00:06:59.836 "iscsi_initiator_group_add_initiators", 00:06:59.836 "iscsi_create_initiator_group", 00:06:59.836 "iscsi_get_initiator_groups", 00:06:59.836 "nvmf_set_crdt", 00:06:59.836 "nvmf_set_config", 00:06:59.836 "nvmf_set_max_subsystems", 00:06:59.836 "nvmf_stop_mdns_prr", 00:06:59.836 "nvmf_publish_mdns_prr", 00:06:59.836 "nvmf_subsystem_get_listeners", 00:06:59.836 "nvmf_subsystem_get_qpairs", 00:06:59.836 "nvmf_subsystem_get_controllers", 00:06:59.836 "nvmf_get_stats", 00:06:59.836 "nvmf_get_transports", 00:06:59.836 "nvmf_create_transport", 00:06:59.836 "nvmf_get_targets", 00:06:59.836 "nvmf_delete_target", 00:06:59.836 "nvmf_create_target", 00:06:59.836 "nvmf_subsystem_allow_any_host", 00:06:59.836 
"nvmf_subsystem_set_keys", 00:06:59.836 "nvmf_subsystem_remove_host", 00:06:59.836 "nvmf_subsystem_add_host", 00:06:59.836 "nvmf_ns_remove_host", 00:06:59.836 "nvmf_ns_add_host", 00:06:59.836 "nvmf_subsystem_remove_ns", 00:06:59.836 "nvmf_subsystem_set_ns_ana_group", 00:06:59.836 "nvmf_subsystem_add_ns", 00:06:59.836 "nvmf_subsystem_listener_set_ana_state", 00:06:59.836 "nvmf_discovery_get_referrals", 00:06:59.836 "nvmf_discovery_remove_referral", 00:06:59.836 "nvmf_discovery_add_referral", 00:06:59.836 "nvmf_subsystem_remove_listener", 00:06:59.836 "nvmf_subsystem_add_listener", 00:06:59.836 "nvmf_delete_subsystem", 00:06:59.836 "nvmf_create_subsystem", 00:06:59.836 "nvmf_get_subsystems", 00:06:59.836 "env_dpdk_get_mem_stats", 00:06:59.836 "nbd_get_disks", 00:06:59.836 "nbd_stop_disk", 00:06:59.836 "nbd_start_disk", 00:06:59.836 "ublk_recover_disk", 00:06:59.836 "ublk_get_disks", 00:06:59.836 "ublk_stop_disk", 00:06:59.836 "ublk_start_disk", 00:06:59.836 "ublk_destroy_target", 00:06:59.836 "ublk_create_target", 00:06:59.836 "virtio_blk_create_transport", 00:06:59.836 "virtio_blk_get_transports", 00:06:59.836 "vhost_controller_set_coalescing", 00:06:59.836 "vhost_get_controllers", 00:06:59.836 "vhost_delete_controller", 00:06:59.836 "vhost_create_blk_controller", 00:06:59.836 "vhost_scsi_controller_remove_target", 00:06:59.836 "vhost_scsi_controller_add_target", 00:06:59.836 "vhost_start_scsi_controller", 00:06:59.836 "vhost_create_scsi_controller", 00:06:59.836 "thread_set_cpumask", 00:06:59.836 "scheduler_set_options", 00:06:59.836 "framework_get_governor", 00:06:59.836 "framework_get_scheduler", 00:06:59.836 "framework_set_scheduler", 00:06:59.836 "framework_get_reactors", 00:06:59.836 "thread_get_io_channels", 00:06:59.836 "thread_get_pollers", 00:06:59.836 "thread_get_stats", 00:06:59.836 "framework_monitor_context_switch", 00:06:59.836 "spdk_kill_instance", 00:06:59.836 "log_enable_timestamps", 00:06:59.836 "log_get_flags", 00:06:59.836 "log_clear_flag", 
00:06:59.836 "log_set_flag", 00:06:59.836 "log_get_level", 00:06:59.836 "log_set_level", 00:06:59.836 "log_get_print_level", 00:06:59.836 "log_set_print_level", 00:06:59.836 "framework_enable_cpumask_locks", 00:06:59.836 "framework_disable_cpumask_locks", 00:06:59.836 "framework_wait_init", 00:06:59.836 "framework_start_init", 00:06:59.836 "scsi_get_devices", 00:06:59.836 "bdev_get_histogram", 00:06:59.836 "bdev_enable_histogram", 00:06:59.836 "bdev_set_qos_limit", 00:06:59.836 "bdev_set_qd_sampling_period", 00:06:59.836 "bdev_get_bdevs", 00:06:59.836 "bdev_reset_iostat", 00:06:59.836 "bdev_get_iostat", 00:06:59.836 "bdev_examine", 00:06:59.836 "bdev_wait_for_examine", 00:06:59.836 "bdev_set_options", 00:06:59.836 "accel_get_stats", 00:06:59.836 "accel_set_options", 00:06:59.836 "accel_set_driver", 00:06:59.836 "accel_crypto_key_destroy", 00:06:59.836 "accel_crypto_keys_get", 00:06:59.836 "accel_crypto_key_create", 00:06:59.836 "accel_assign_opc", 00:06:59.836 "accel_get_module_info", 00:06:59.836 "accel_get_opc_assignments", 00:06:59.836 "vmd_rescan", 00:06:59.836 "vmd_remove_device", 00:06:59.836 "vmd_enable", 00:06:59.836 "sock_get_default_impl", 00:06:59.836 "sock_set_default_impl", 00:06:59.836 "sock_impl_set_options", 00:06:59.836 "sock_impl_get_options", 00:06:59.836 "iobuf_get_stats", 00:06:59.836 "iobuf_set_options", 00:06:59.836 "keyring_get_keys", 00:06:59.836 "framework_get_pci_devices", 00:06:59.836 "framework_get_config", 00:06:59.836 "framework_get_subsystems", 00:06:59.836 "fsdev_set_opts", 00:06:59.836 "fsdev_get_opts", 00:06:59.836 "trace_get_info", 00:06:59.836 "trace_get_tpoint_group_mask", 00:06:59.836 "trace_disable_tpoint_group", 00:06:59.836 "trace_enable_tpoint_group", 00:06:59.836 "trace_clear_tpoint_mask", 00:06:59.836 "trace_set_tpoint_mask", 00:06:59.836 "notify_get_notifications", 00:06:59.836 "notify_get_types", 00:06:59.836 "spdk_get_version", 00:06:59.836 "rpc_get_methods" 00:06:59.836 ] 00:06:59.836 08:40:30 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:59.836 08:40:30 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:59.836 08:40:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:59.836 08:40:30 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:59.836 08:40:30 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57794 00:06:59.836 08:40:30 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57794 ']' 00:06:59.836 08:40:30 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57794 00:06:59.836 08:40:30 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:59.836 08:40:30 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.836 08:40:30 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57794 00:06:59.836 killing process with pid 57794 00:06:59.836 08:40:30 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.836 08:40:30 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.836 08:40:30 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57794' 00:06:59.836 08:40:30 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57794 00:06:59.836 08:40:30 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57794 00:07:02.368 00:07:02.368 real 0m4.214s 00:07:02.368 user 0m7.706s 00:07:02.368 sys 0m0.661s 00:07:02.368 08:40:32 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.368 08:40:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:02.368 ************************************ 00:07:02.368 END TEST spdkcli_tcp 00:07:02.368 ************************************ 00:07:02.368 08:40:32 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:02.368 08:40:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.368 08:40:32 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.368 08:40:32 -- common/autotest_common.sh@10 -- # set +x 00:07:02.368 ************************************ 00:07:02.368 START TEST dpdk_mem_utility 00:07:02.368 ************************************ 00:07:02.368 08:40:32 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:02.368 * Looking for test storage... 00:07:02.368 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:02.368 08:40:33 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:02.368 08:40:33 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:07:02.368 08:40:33 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:02.368 08:40:33 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:02.368 08:40:33 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.368 08:40:33 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.368 08:40:33 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.368 08:40:33 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.368 08:40:33 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.368 08:40:33 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.368 08:40:33 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.368 08:40:33 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.368 08:40:33 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.368 08:40:33 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.368 08:40:33 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.368 08:40:33 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:02.368 08:40:33 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:02.368 
08:40:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.368 08:40:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:02.368 08:40:33 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:02.368 08:40:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:02.368 08:40:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.368 08:40:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:02.369 08:40:33 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.369 08:40:33 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:02.369 08:40:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:02.369 08:40:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.369 08:40:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:02.369 08:40:33 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.369 08:40:33 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.369 08:40:33 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.369 08:40:33 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:02.369 08:40:33 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.369 08:40:33 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:02.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.369 --rc genhtml_branch_coverage=1 00:07:02.369 --rc genhtml_function_coverage=1 00:07:02.369 --rc genhtml_legend=1 00:07:02.369 --rc geninfo_all_blocks=1 00:07:02.369 --rc geninfo_unexecuted_blocks=1 00:07:02.369 00:07:02.369 ' 00:07:02.369 08:40:33 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:02.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.369 --rc 
genhtml_branch_coverage=1 00:07:02.369 --rc genhtml_function_coverage=1 00:07:02.369 --rc genhtml_legend=1 00:07:02.369 --rc geninfo_all_blocks=1 00:07:02.369 --rc geninfo_unexecuted_blocks=1 00:07:02.369 00:07:02.369 ' 00:07:02.369 08:40:33 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:02.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.369 --rc genhtml_branch_coverage=1 00:07:02.369 --rc genhtml_function_coverage=1 00:07:02.369 --rc genhtml_legend=1 00:07:02.369 --rc geninfo_all_blocks=1 00:07:02.369 --rc geninfo_unexecuted_blocks=1 00:07:02.369 00:07:02.369 ' 00:07:02.369 08:40:33 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:02.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.369 --rc genhtml_branch_coverage=1 00:07:02.369 --rc genhtml_function_coverage=1 00:07:02.369 --rc genhtml_legend=1 00:07:02.369 --rc geninfo_all_blocks=1 00:07:02.369 --rc geninfo_unexecuted_blocks=1 00:07:02.369 00:07:02.369 ' 00:07:02.369 08:40:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:02.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:02.369 08:40:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57916 00:07:02.369 08:40:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57916 00:07:02.369 08:40:33 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57916 ']' 00:07:02.369 08:40:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:02.369 08:40:33 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.369 08:40:33 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.369 08:40:33 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.369 08:40:33 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.369 08:40:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:02.369 [2024-11-20 08:40:33.267731] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:02.369 [2024-11-20 08:40:33.267923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57916 ] 00:07:02.628 [2024-11-20 08:40:33.454215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.885 [2024-11-20 08:40:33.591838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.822 08:40:34 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.822 08:40:34 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:03.822 08:40:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:03.822 08:40:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:03.822 08:40:34 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.822 08:40:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:03.822 { 00:07:03.822 "filename": "/tmp/spdk_mem_dump.txt" 00:07:03.822 } 00:07:03.822 08:40:34 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.822 08:40:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:03.822 DPDK memory size 816.000000 MiB in 1 heap(s) 00:07:03.822 1 heaps totaling size 816.000000 MiB 00:07:03.822 size: 816.000000 MiB heap id: 0 00:07:03.822 end heaps---------- 00:07:03.822 9 mempools totaling size 595.772034 MiB 00:07:03.822 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:03.822 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:03.822 size: 92.545471 MiB name: bdev_io_57916 00:07:03.822 size: 50.003479 MiB name: msgpool_57916 00:07:03.822 size: 36.509338 MiB name: fsdev_io_57916 00:07:03.822 size: 
21.763794 MiB name: PDU_Pool 00:07:03.822 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:03.822 size: 4.133484 MiB name: evtpool_57916 00:07:03.822 size: 0.026123 MiB name: Session_Pool 00:07:03.822 end mempools------- 00:07:03.822 6 memzones totaling size 4.142822 MiB 00:07:03.822 size: 1.000366 MiB name: RG_ring_0_57916 00:07:03.822 size: 1.000366 MiB name: RG_ring_1_57916 00:07:03.822 size: 1.000366 MiB name: RG_ring_4_57916 00:07:03.822 size: 1.000366 MiB name: RG_ring_5_57916 00:07:03.822 size: 0.125366 MiB name: RG_ring_2_57916 00:07:03.822 size: 0.015991 MiB name: RG_ring_3_57916 00:07:03.822 end memzones------- 00:07:03.822 08:40:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:03.822 heap id: 0 total size: 816.000000 MiB number of busy elements: 319 number of free elements: 18 00:07:03.822 list of free elements. size: 16.790405 MiB 00:07:03.822 element at address: 0x200006400000 with size: 1.995972 MiB 00:07:03.822 element at address: 0x20000a600000 with size: 1.995972 MiB 00:07:03.822 element at address: 0x200003e00000 with size: 1.991028 MiB 00:07:03.822 element at address: 0x200018d00040 with size: 0.999939 MiB 00:07:03.822 element at address: 0x200019100040 with size: 0.999939 MiB 00:07:03.822 element at address: 0x200019200000 with size: 0.999084 MiB 00:07:03.822 element at address: 0x200031e00000 with size: 0.994324 MiB 00:07:03.822 element at address: 0x200000400000 with size: 0.992004 MiB 00:07:03.822 element at address: 0x200018a00000 with size: 0.959656 MiB 00:07:03.822 element at address: 0x200019500040 with size: 0.936401 MiB 00:07:03.822 element at address: 0x200000200000 with size: 0.716980 MiB 00:07:03.822 element at address: 0x20001ac00000 with size: 0.560730 MiB 00:07:03.822 element at address: 0x200000c00000 with size: 0.490173 MiB 00:07:03.822 element at address: 0x200018e00000 with size: 0.487976 MiB 00:07:03.822 element at address: 0x200019600000 
with size: 0.485413 MiB 00:07:03.822 element at address: 0x200012c00000 with size: 0.443481 MiB 00:07:03.822 element at address: 0x200028000000 with size: 0.390442 MiB 00:07:03.822 element at address: 0x200000800000 with size: 0.350891 MiB 00:07:03.822 list of standard malloc elements. size: 199.288696 MiB 00:07:03.822 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:07:03.822 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:07:03.822 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:07:03.822 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:07:03.822 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:07:03.822 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:07:03.822 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:07:03.823 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:07:03.823 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:07:03.823 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:07:03.823 element at address: 0x200012bff040 with size: 0.000305 MiB 00:07:03.823 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:07:03.823 element at address: 
0x2000004fe940 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:07:03.823 
element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:07:03.823 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200000c7e4c0 with size: 0.000244 
MiB 00:07:03.823 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200000cff000 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:07:03.823 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200012bff180 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200012bff280 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200012bff380 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200012bff480 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200012bff580 
with size: 0.000244 MiB 00:07:03.823 element at address: 0x200012bff680 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200012bff780 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200012bff880 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200012bff980 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200012c71880 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200012c71980 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200012c72080 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200012c72180 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:07:03.823 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:07:03.824 element at 
address: 0x200018e7d5c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:07:03.824 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:07:03.824 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac909c0 with size: 0.000244 MiB 
00:07:03.824 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac925c0 with 
size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:07:03.824 element at address: 
0x20001ac941c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:07:03.824 element at address: 0x200028063f40 with size: 0.000244 MiB 00:07:03.824 element at address: 0x200028064040 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806af80 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806b080 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806b180 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806b280 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806b380 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806b480 with size: 0.000244 MiB 00:07:03.824 
element at address: 0x20002806b580 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806b680 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806b780 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806b880 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806b980 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806be80 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806c080 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806c180 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806c280 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806c380 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806c480 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806c580 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806c680 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806c780 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806c880 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806c980 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:07:03.824 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806d080 with size: 0.000244 
MiB 00:07:03.825 element at address: 0x20002806d180 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806d280 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806d380 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806d480 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806d580 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806d680 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806d780 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806d880 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806d980 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806da80 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806db80 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806de80 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806df80 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806e080 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806e180 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806e280 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806e380 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806e480 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806e580 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806e680 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806e780 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806e880 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806e980 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806ec80 
with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806f080 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806f180 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806f280 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806f380 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806f480 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806f580 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806f680 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806f780 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806f880 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806f980 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:07:03.825 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:07:03.825 list of memzone associated elements. 
size: 599.920898 MiB
00:07:03.825 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:07:03.825 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:07:03.825 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:07:03.825 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:07:03.825 element at address: 0x200012df4740 with size: 92.045105 MiB
00:07:03.825 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57916_0
00:07:03.825 element at address: 0x200000dff340 with size: 48.003113 MiB
00:07:03.825 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57916_0
00:07:03.825 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:07:03.825 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57916_0
00:07:03.825 element at address: 0x2000197be900 with size: 20.255615 MiB
00:07:03.825 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:07:03.825 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:07:03.825 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:07:03.825 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:07:03.825 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57916_0
00:07:03.825 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:07:03.825 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57916
00:07:03.825 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:07:03.825 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57916
00:07:03.825 element at address: 0x200018efde00 with size: 1.008179 MiB
00:07:03.825 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:07:03.825 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:07:03.825 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:07:03.825 element at address: 0x200018afde00 with size: 1.008179 MiB
00:07:03.825 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:07:03.825 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:07:03.825 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:07:03.825 element at address: 0x200000cff100 with size: 1.000549 MiB
00:07:03.825 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57916
00:07:03.825 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:07:03.825 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57916
00:07:03.825 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:07:03.825 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57916
00:07:03.825 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:07:03.825 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57916
00:07:03.825 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:07:03.825 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57916
00:07:03.825 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:07:03.825 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57916
00:07:03.825 element at address: 0x200018e7dac0 with size: 0.500549 MiB
00:07:03.825 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:07:03.825 element at address: 0x200012c72280 with size: 0.500549 MiB
00:07:03.825 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:07:03.825 element at address: 0x20001967c440 with size: 0.250549 MiB
00:07:03.825 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:07:03.825 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:07:03.825 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57916
00:07:03.825 element at address: 0x20000085df80 with size: 0.125549 MiB
00:07:03.825 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57916
00:07:03.825 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:07:03.825
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:07:03.825 element at address: 0x200028064140 with size: 0.023804 MiB
00:07:03.825 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:07:03.825 element at address: 0x200000859d40 with size: 0.016174 MiB
00:07:03.825 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57916
00:07:03.825 element at address: 0x20002806a2c0 with size: 0.002502 MiB
00:07:03.825 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:07:03.825 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:07:03.825 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57916
00:07:03.825 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:07:03.825 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57916
00:07:03.825 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:07:03.825 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57916
00:07:03.825 element at address: 0x20002806ae00 with size: 0.000366 MiB
00:07:03.825 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:07:03.825 08:40:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:07:03.825 08:40:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57916
00:07:03.825 08:40:34 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57916 ']'
00:07:03.825 08:40:34 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57916
00:07:03.825 08:40:34 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:07:03.826 08:40:34 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:03.826 08:40:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57916
killing process with pid 57916
08:40:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:03.826 08:40:34 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:03.826 08:40:34 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57916'
00:07:03.826 08:40:34 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57916
00:07:03.826 08:40:34 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57916
00:07:06.357 ************************************
00:07:06.357 END TEST dpdk_mem_utility
00:07:06.357 ************************************
00:07:06.357
00:07:06.357 real 0m4.012s
00:07:06.357 user 0m4.078s
00:07:06.357 sys 0m0.634s
00:07:06.357 08:40:36 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:06.357 08:40:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:06.357 08:40:36 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:07:06.357 08:40:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:06.357 08:40:36 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:06.357 08:40:36 -- common/autotest_common.sh@10 -- # set +x
00:07:06.357 ************************************
00:07:06.357 START TEST event
00:07:06.357 ************************************
00:07:06.357 08:40:37 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:07:06.357 * Looking for test storage...
00:07:06.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:07:06.357 08:40:37 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:06.357 08:40:37 event -- common/autotest_common.sh@1693 -- # lcov --version
00:07:06.357 08:40:37 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:06.357 08:40:37 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:06.357 08:40:37 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:06.357 08:40:37 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:06.357 08:40:37 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:06.357 08:40:37 event -- scripts/common.sh@336 -- # IFS=.-:
00:07:06.357 08:40:37 event -- scripts/common.sh@336 -- # read -ra ver1
00:07:06.357 08:40:37 event -- scripts/common.sh@337 -- # IFS=.-:
00:07:06.357 08:40:37 event -- scripts/common.sh@337 -- # read -ra ver2
00:07:06.357 08:40:37 event -- scripts/common.sh@338 -- # local 'op=<'
00:07:06.357 08:40:37 event -- scripts/common.sh@340 -- # ver1_l=2
00:07:06.357 08:40:37 event -- scripts/common.sh@341 -- # ver2_l=1
00:07:06.357 08:40:37 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:06.357 08:40:37 event -- scripts/common.sh@344 -- # case "$op" in
00:07:06.357 08:40:37 event -- scripts/common.sh@345 -- # : 1
00:07:06.357 08:40:37 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:06.357 08:40:37 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:06.357 08:40:37 event -- scripts/common.sh@365 -- # decimal 1
00:07:06.357 08:40:37 event -- scripts/common.sh@353 -- # local d=1
00:07:06.357 08:40:37 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:06.357 08:40:37 event -- scripts/common.sh@355 -- # echo 1
00:07:06.357 08:40:37 event -- scripts/common.sh@365 -- # ver1[v]=1
00:07:06.357 08:40:37 event -- scripts/common.sh@366 -- # decimal 2
00:07:06.357 08:40:37 event -- scripts/common.sh@353 -- # local d=2
00:07:06.357 08:40:37 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:06.357 08:40:37 event -- scripts/common.sh@355 -- # echo 2
00:07:06.357 08:40:37 event -- scripts/common.sh@366 -- # ver2[v]=2
00:07:06.357 08:40:37 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:06.357 08:40:37 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:06.357 08:40:37 event -- scripts/common.sh@368 -- # return 0
00:07:06.357 08:40:37 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:06.357 08:40:37 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:06.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:06.357 --rc genhtml_branch_coverage=1
00:07:06.357 --rc genhtml_function_coverage=1
00:07:06.357 --rc genhtml_legend=1
00:07:06.357 --rc geninfo_all_blocks=1
00:07:06.357 --rc geninfo_unexecuted_blocks=1
00:07:06.357
00:07:06.357 '
00:07:06.357 08:40:37 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:06.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:06.357 --rc genhtml_branch_coverage=1
00:07:06.357 --rc genhtml_function_coverage=1
00:07:06.357 --rc genhtml_legend=1
00:07:06.357 --rc geninfo_all_blocks=1
00:07:06.357 --rc geninfo_unexecuted_blocks=1
00:07:06.357
00:07:06.357 '
00:07:06.357 08:40:37 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:07:06.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:06.357 --rc genhtml_branch_coverage=1
00:07:06.357 --rc genhtml_function_coverage=1
00:07:06.357 --rc genhtml_legend=1
00:07:06.357 --rc geninfo_all_blocks=1
00:07:06.357 --rc geninfo_unexecuted_blocks=1
00:07:06.357
00:07:06.357 '
00:07:06.357 08:40:37 event -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:07:06.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:06.357 --rc genhtml_branch_coverage=1
00:07:06.357 --rc genhtml_function_coverage=1
00:07:06.357 --rc genhtml_legend=1
00:07:06.357 --rc geninfo_all_blocks=1
00:07:06.357 --rc geninfo_unexecuted_blocks=1
00:07:06.357
00:07:06.357 '
00:07:06.357 08:40:37 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:07:06.357 08:40:37 event -- bdev/nbd_common.sh@6 -- # set -e
00:07:06.357 08:40:37 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:06.357 08:40:37 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:07:06.357 08:40:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:06.357 08:40:37 event -- common/autotest_common.sh@10 -- # set +x
00:07:06.357 ************************************
00:07:06.357 START TEST event_perf
00:07:06.357 ************************************
00:07:06.357 08:40:37 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:06.357 Running I/O for 1 seconds...[2024-11-20 08:40:37.238268] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
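The xtrace above steps through the `lt`/`cmp_versions` helpers from `scripts/common.sh`: both version strings are split on `.`, `-`, and `:`, then compared component by component. A minimal standalone sketch of that comparison (a simplified hypothetical reimplementation for numeric components only, not the actual `scripts/common.sh` code):

```shell
#!/usr/bin/env bash
# Sketch of the component-wise version comparison traced above: split both
# versions on . - :, pad the shorter with zeros, compare left to right.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0    # strictly less at this component
        (( a > b )) && return 1    # strictly greater: not less-than
    done
    return 1                       # all components equal: not less-than
}

lt 1.15 2 && echo "lcov 1.15 predates 2"   # prints "lcov 1.15 predates 2"
```

This is why the trace shows `return 0` for `lt 1.15 2`: the first components already satisfy 1 < 2, so later components are never consulted.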
00:07:06.357 [2024-11-20 08:40:37.238622] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58024 ]
00:07:06.616 [2024-11-20 08:40:37.422634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:06.874 [2024-11-20 08:40:37.565138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:06.874 [2024-11-20 08:40:37.565276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:06.874 Running I/O for 1 seconds...[2024-11-20 08:40:37.565340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:06.874 [2024-11-20 08:40:37.565347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:08.250
00:07:08.250 lcore 0: 187443
00:07:08.250 lcore 1: 187443
00:07:08.250 lcore 2: 187444
00:07:08.250 lcore 3: 187442
00:07:08.250 done.
00:07:08.250
00:07:08.250 real 0m1.617s
00:07:08.250 user 0m4.365s
00:07:08.250 sys 0m0.123s
00:07:08.250 08:40:38 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:08.250 08:40:38 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:07:08.250 ************************************
00:07:08.250 END TEST event_perf
00:07:08.250 ************************************
00:07:08.250 08:40:38 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:07:08.250 08:40:38 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:08.250 08:40:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:08.250 08:40:38 event -- common/autotest_common.sh@10 -- # set +x
00:07:08.250 ************************************
00:07:08.250 START TEST event_reactor
00:07:08.250 ************************************
00:07:08.250 08:40:38 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:07:08.250 [2024-11-20 08:40:38.895628] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
00:07:08.250 [2024-11-20 08:40:38.895768] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58058 ]
00:07:08.250 [2024-11-20 08:40:39.075769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:08.511 [2024-11-20 08:40:39.205536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:09.888 test_start
00:07:09.889 oneshot
00:07:09.889 tick 100
00:07:09.889 tick 100
00:07:09.889 tick 250
00:07:09.889 tick 100
00:07:09.889 tick 100
00:07:09.889 tick 100
00:07:09.889 tick 250
00:07:09.889 tick 500
00:07:09.889 tick 100
00:07:09.889 tick 100
00:07:09.889 tick 250
00:07:09.889 tick 100
00:07:09.889 tick 100
00:07:09.889 test_end
00:07:09.889
00:07:09.889 real 0m1.572s
00:07:09.889 user 0m1.366s
00:07:09.889 sys 0m0.098s
00:07:09.889 08:40:40 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:09.889 ************************************
00:07:09.889 END TEST event_reactor
00:07:09.889 ************************************
00:07:09.889 08:40:40 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:07:09.889 08:40:40 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:09.889 08:40:40 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:09.889 08:40:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:09.889 08:40:40 event -- common/autotest_common.sh@10 -- # set +x
00:07:09.889 ************************************
00:07:09.889 START TEST event_reactor_perf
00:07:09.889 ************************************
00:07:09.889 08:40:40 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:09.889 [2024-11-20 08:40:40.516604] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
00:07:09.889 [2024-11-20 08:40:40.516918] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58100 ]
00:07:09.889 [2024-11-20 08:40:40.689475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:10.146 [2024-11-20 08:40:40.817992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:11.157 test_start
00:07:11.157 test_end
00:07:11.157 Performance: 282766 events per second
00:07:11.157
00:07:11.157 real 0m1.565s
00:07:11.157 user 0m1.368s
00:07:11.157 sys 0m0.087s
00:07:11.157 08:40:42 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:11.157 08:40:42 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:07:11.157 ************************************
00:07:11.157 END TEST event_reactor_perf
00:07:11.157 ************************************
00:07:11.416 08:40:42 event -- event/event.sh@49 -- # uname -s
00:07:11.416 08:40:42 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:07:11.416 08:40:42 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:07:11.416 08:40:42 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:11.416 08:40:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:11.416 08:40:42 event -- common/autotest_common.sh@10 -- # set +x
00:07:11.416 ************************************
00:07:11.416 START TEST event_scheduler
00:07:11.416 ************************************
00:07:11.417 08:40:42 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:07:11.417 * Looking for test storage...
00:07:11.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:07:11.417 08:40:42 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:11.417 08:40:42 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version
00:07:11.417 08:40:42 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:11.417 08:40:42 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:11.417 08:40:42 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:07:11.417 08:40:42 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:11.417 08:40:42 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:11.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:11.417 --rc genhtml_branch_coverage=1
00:07:11.417 --rc genhtml_function_coverage=1
00:07:11.417 --rc genhtml_legend=1
00:07:11.417 --rc geninfo_all_blocks=1
00:07:11.417 --rc geninfo_unexecuted_blocks=1
00:07:11.417
00:07:11.417 '
00:07:11.417 08:40:42 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:11.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:11.417 --rc genhtml_branch_coverage=1
00:07:11.417 --rc genhtml_function_coverage=1
00:07:11.417 --rc genhtml_legend=1
00:07:11.417 --rc geninfo_all_blocks=1
00:07:11.417 --rc geninfo_unexecuted_blocks=1
00:07:11.417
00:07:11.417 '
00:07:11.417 08:40:42 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:07:11.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:11.417 --rc genhtml_branch_coverage=1
00:07:11.417 --rc genhtml_function_coverage=1
00:07:11.417 --rc genhtml_legend=1
00:07:11.417 --rc geninfo_all_blocks=1
00:07:11.417 --rc geninfo_unexecuted_blocks=1
00:07:11.417
00:07:11.417 '
00:07:11.417 08:40:42 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:07:11.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:11.417 --rc genhtml_branch_coverage=1
00:07:11.417 --rc genhtml_function_coverage=1
00:07:11.417 --rc genhtml_legend=1
00:07:11.417 --rc geninfo_all_blocks=1
00:07:11.417 --rc geninfo_unexecuted_blocks=1
00:07:11.417
00:07:11.417 '
00:07:11.417 08:40:42 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:07:11.417 08:40:42 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58171
00:07:11.417 08:40:42 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:07:11.417 08:40:42 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:07:11.417 08:40:42 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58171
00:07:11.417 08:40:42 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58171 ']'
00:07:11.417 08:40:42 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:11.417 08:40:42 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:11.417 08:40:42 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:11.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:11.417 08:40:42 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:11.417 08:40:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:11.676 [2024-11-20 08:40:42.398386] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
00:07:11.676 [2024-11-20 08:40:42.398946] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58171 ]
00:07:11.935 [2024-11-20 08:40:42.606025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:11.935 [2024-11-20 08:40:42.770182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:11.935 [2024-11-20 08:40:42.770306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:11.935 [2024-11-20 08:40:42.770450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:11.935 [2024-11-20 08:40:42.770460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:12.872 08:40:43 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:12.872 08:40:43 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:07:12.872 08:40:43 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:07:12.872 08:40:43 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:12.872 08:40:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:12.872 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:12.872 POWER: Cannot set governor of lcore 0 to userspace
00:07:12.872 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:12.872 POWER: Cannot set governor of lcore 0 to performance
00:07:12.872 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:12.872 POWER: Cannot set governor of lcore 0 to userspace
00:07:12.872 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:12.872 POWER: Cannot set governor of lcore 0 to userspace
00:07:12.872 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:07:12.872 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:07:12.872 POWER: Unable to set Power Management Environment for lcore 0
[2024-11-20 08:40:43.457598] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0
[2024-11-20 08:40:43.457627] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0
[2024-11-20 08:40:43.457643] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
[2024-11-20 08:40:43.457669] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
[2024-11-20 08:40:43.457682] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
[2024-11-20 08:40:43.457696] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:07:12.872 08:40:43 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:12.872 08:40:43 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:07:12.872 08:40:43 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:12.872 08:40:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:13.131 [2024-11-20 08:40:43.781575] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:07:12.872 08:40:43 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:12.872 08:40:43 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:07:12.872 08:40:43 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:12.872 08:40:43 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:12.872 08:40:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:13.131 ************************************
00:07:13.131 START TEST scheduler_create_thread
00:07:13.131 ************************************
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:13.131 2
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:13.131 3
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:13.131 4
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:13.131 5
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:13.131 6
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:13.131 7
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:13.131 8
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:13.131 9
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:13.131 10
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.131 08:40:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:14.507 08:40:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.507 08:40:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:07:14.507 08:40:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:07:14.507 08:40:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.507 08:40:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:15.883 ************************************
00:07:15.883 END TEST scheduler_create_thread
00:07:15.883 ************************************
00:07:15.883 08:40:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:15.883
00:07:15.883 real 0m2.615s
00:07:15.883 user 0m0.016s
00:07:15.883 sys 0m0.005s
00:07:15.883 08:40:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:15.883 08:40:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:15.883 08:40:46 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:07:15.883 08:40:46 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58171
00:07:15.883 08:40:46 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58171 ']'
00:07:15.883 08:40:46 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58171
00:07:15.883 08:40:46 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:07:15.883 08:40:46 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:15.883 08:40:46 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58171
00:07:15.883 killing process with pid 58171
00:07:15.883 08:40:46 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:07:15.883 08:40:46 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:07:15.883 08:40:46 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58171'
00:07:15.883 08:40:46 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58171
00:07:15.883 08:40:46 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58171
00:07:16.141 [2024-11-20 08:40:46.886644] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:07:17.078
00:07:17.078 real 0m5.865s
00:07:17.078 user 0m10.385s
00:07:17.078 sys 0m0.517s
00:07:17.078 08:40:47 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:17.078 08:40:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:17.078 ************************************
00:07:17.078 END TEST event_scheduler
00:07:17.078 ************************************
00:07:17.337 08:40:47 event -- event/event.sh@51 -- # modprobe -n nbd
00:07:17.337 08:40:48 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:07:17.337 08:40:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:17.337 08:40:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:17.337 08:40:48 event -- common/autotest_common.sh@10 -- # set +x
00:07:17.337 ************************************
00:07:17.337 START TEST app_repeat
00:07:17.337 ************************************
00:07:17.337 08:40:48 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:07:17.337 08:40:48 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:17.337 08:40:48 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:17.337 08:40:48 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:07:17.337 08:40:48 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:17.337 08:40:48 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:07:17.337 08:40:48 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:07:17.337 08:40:48 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:07:17.337 Process app_repeat pid: 58282
00:07:17.337 spdk_app_start Round 0
00:07:17.337 08:40:48 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58282
00:07:17.337 08:40:48 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:07:17.337 08:40:48 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:07:17.337 08:40:48 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58282'
00:07:17.337 08:40:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:17.337 08:40:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:07:17.337 08:40:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58282 /var/tmp/spdk-nbd.sock
00:07:17.337 08:40:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58282 ']'
00:07:17.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:17.337 08:40:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:17.337 08:40:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:17.337 08:40:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:17.337 08:40:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:17.337 08:40:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:17.337 [2024-11-20 08:40:48.065983] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
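The scheduler_create_thread test above drives the test app entirely through plugin RPCs: four busy threads and four idle threads pinned one per core, one unpinned thread at 30% activity, one raised from 0% to 50%, and one created only to be deleted. A condensed dry-run sketch of that sequence (the `rpc` function here just echoes, so no running SPDK target is needed; ids 11 and 12 are the thread ids this particular run returned):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC sequence traced in scheduler_create_thread.
# "rpc" only echoes the command line here; the real test routes it through
# scripts/rpc.py --plugin scheduler_plugin against a live target.
rpc() { echo "rpc.py --plugin scheduler_plugin $*"; }

# Four busy and four idle threads, each pinned to one core
# (the masks 0x1 0x2 0x4 0x8 are just 1 << core).
for core in 0 1 2 3; do
    rpc scheduler_thread_create -n active_pinned -m "0x$((1 << core))" -a 100
    rpc scheduler_thread_create -n idle_pinned   -m "0x$((1 << core))" -a 0
done

rpc scheduler_thread_create -n one_third_active -a 30   # unpinned, 30% busy
rpc scheduler_thread_create -n half_active -a 0         # created idle (id 11 in this run)
rpc scheduler_thread_set_active 11 50                   # then raised to 50% activity
rpc scheduler_thread_create -n deleted -a 100           # id 12 in this run...
rpc scheduler_thread_delete 12                          # ...and removed again
```

The varied activity levels and pinning are what give the dynamic scheduler (set earlier via `framework_set_scheduler dynamic`) something to rebalance.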
00:07:17.337 [2024-11-20 08:40:48.066136] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58282 ]
00:07:17.337 [2024-11-20 08:40:48.238470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:17.596 [2024-11-20 08:40:48.395427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:17.596 [2024-11-20 08:40:48.395438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:18.531 08:40:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:18.531 08:40:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:07:18.531 08:40:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:18.531 Malloc0
00:07:18.531 08:40:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:19.097 Malloc1
00:07:19.097 08:40:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:19.097 08:40:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:19.097 08:40:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:19.097 08:40:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:19.097 08:40:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:19.097 08:40:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:19.097 08:40:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:19.097 08:40:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:19.097 08:40:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:19.097 08:40:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:19.097 08:40:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:19.097 08:40:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:19.097 08:40:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:19.097 08:40:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:19.097 08:40:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:19.097 08:40:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:19.356 /dev/nbd0
00:07:19.356 08:40:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:19.356 08:40:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:19.356 08:40:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:07:19.356 08:40:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:19.356 08:40:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:19.356 08:40:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:19.356 08:40:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:07:19.356 08:40:50 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:19.356 08:40:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:19.356 08:40:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:19.356 08:40:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:19.356 1+0 records in
00:07:19.356 1+0
records out 00:07:19.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318006 s, 12.9 MB/s 00:07:19.356 08:40:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:19.356 08:40:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:19.356 08:40:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:19.356 08:40:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.356 08:40:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:19.356 08:40:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.356 08:40:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:19.356 08:40:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:19.616 /dev/nbd1 00:07:19.616 08:40:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:19.616 08:40:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:19.616 08:40:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:19.616 08:40:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:19.616 08:40:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.616 08:40:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.616 08:40:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:19.616 08:40:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:19.616 08:40:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.616 08:40:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.616 08:40:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:19.616 1+0 records in 00:07:19.616 1+0 records out 00:07:19.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034558 s, 11.9 MB/s 00:07:19.616 08:40:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:19.616 08:40:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:19.616 08:40:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:19.616 08:40:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.616 08:40:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:19.616 08:40:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.616 08:40:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:19.616 08:40:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.616 08:40:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.616 08:40:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.875 08:40:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:19.875 { 00:07:19.875 "nbd_device": "/dev/nbd0", 00:07:19.875 "bdev_name": "Malloc0" 00:07:19.875 }, 00:07:19.875 { 00:07:19.875 "nbd_device": "/dev/nbd1", 00:07:19.875 "bdev_name": "Malloc1" 00:07:19.875 } 00:07:19.875 ]' 00:07:19.875 08:40:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:19.875 { 00:07:19.875 "nbd_device": "/dev/nbd0", 00:07:19.875 "bdev_name": "Malloc0" 00:07:19.875 }, 00:07:19.875 { 00:07:19.875 "nbd_device": "/dev/nbd1", 00:07:19.875 "bdev_name": "Malloc1" 00:07:19.875 } 00:07:19.875 ]' 00:07:19.875 08:40:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:20.135 /dev/nbd1' 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:20.135 /dev/nbd1' 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:20.135 256+0 records in 00:07:20.135 256+0 records out 00:07:20.135 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00800088 s, 131 MB/s 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:20.135 256+0 records in 00:07:20.135 256+0 records out 00:07:20.135 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282519 s, 37.1 MB/s 00:07:20.135 08:40:50 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:20.135 256+0 records in 00:07:20.135 256+0 records out 00:07:20.135 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292699 s, 35.8 MB/s 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.135 08:40:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:20.393 08:40:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:20.393 08:40:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:20.393 08:40:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:20.393 08:40:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.393 08:40:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.393 08:40:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:20.393 08:40:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:20.393 08:40:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.393 08:40:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.393 08:40:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:20.652 08:40:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:20.652 08:40:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:20.652 08:40:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:20.652 08:40:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.652 08:40:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.652 08:40:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:20.652 08:40:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:07:20.652 08:40:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.652 08:40:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:20.652 08:40:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.652 08:40:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:20.910 08:40:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:20.910 08:40:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.910 08:40:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:21.168 08:40:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:21.168 08:40:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:21.168 08:40:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:21.168 08:40:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:21.168 08:40:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:21.168 08:40:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:21.168 08:40:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:21.168 08:40:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:21.168 08:40:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:21.168 08:40:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:21.426 08:40:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:22.802 [2024-11-20 08:40:53.346218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:22.802 [2024-11-20 08:40:53.472890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.802 [2024-11-20 08:40:53.472900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.802 
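The Round 0 trace above follows the `nbd_dd_data_verify` pattern: fill a temp file from `/dev/urandom` (256 x 4 KiB = 1 MiB), `dd` it onto each `/dev/nbdX`, then `cmp -b -n 1M` each device back against the source file. A minimal Python sketch of that write-then-verify check, using ordinary files in place of the nbd devices (the function name and file layout here are illustrative, not part of the SPDK scripts):

```python
import filecmp
import os
import tempfile

BLOCK = 4096
COUNT = 256  # 256 x 4 KiB = 1 MiB, matching the dd invocations in the trace


def write_and_verify(devices):
    """Write one random payload to every device, then verify each one.

    Mirrors nbd_dd_data_verify: a single 'dd if=/dev/urandom' source file,
    one write pass per device, then one 'cmp' pass per device.
    """
    payload = os.urandom(BLOCK * COUNT)
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "nbdrandtest")
        with open(src, "wb") as f:
            f.write(payload)
        for dev in devices:
            # stands in for: dd if=$src of=$dev bs=4096 count=256 oflag=direct
            with open(dev, "wb") as f:
                f.write(payload)
        # verification pass: cmp -b -n 1M $src $dev for each device
        return all(filecmp.cmp(src, dev, shallow=False) for dev in devices)
```

The real test removes the source file afterwards (`rm .../nbdrandtest`), which the `TemporaryDirectory` context handles here.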
[2024-11-20 08:40:53.661571] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:22.802 [2024-11-20 08:40:53.661656] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:24.705 08:40:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:24.705 spdk_app_start Round 1 00:07:24.705 08:40:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:24.705 08:40:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58282 /var/tmp/spdk-nbd.sock 00:07:24.705 08:40:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58282 ']' 00:07:24.705 08:40:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:24.705 08:40:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:24.705 08:40:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:24.705 08:40:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.705 08:40:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:24.705 08:40:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.705 08:40:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:24.705 08:40:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:25.273 Malloc0 00:07:25.273 08:40:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:25.532 Malloc1 00:07:25.532 08:40:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:25.532 08:40:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.532 08:40:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:25.532 08:40:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:25.532 08:40:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.532 08:40:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:25.532 08:40:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:25.532 08:40:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.532 08:40:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:25.532 08:40:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:25.532 08:40:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.532 08:40:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:25.532 08:40:56 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:25.532 08:40:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:25.532 08:40:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:25.532 08:40:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:25.790 /dev/nbd0 00:07:25.790 08:40:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:25.790 08:40:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:25.790 08:40:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:25.790 08:40:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:25.790 08:40:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:25.790 08:40:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:25.790 08:40:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:25.790 08:40:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:25.790 08:40:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:25.790 08:40:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:25.790 08:40:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:25.790 1+0 records in 00:07:25.790 1+0 records out 00:07:25.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288396 s, 14.2 MB/s 00:07:25.790 08:40:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:25.790 08:40:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:25.790 08:40:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:25.790 
08:40:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:25.790 08:40:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:25.790 08:40:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.790 08:40:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:25.791 08:40:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:26.049 /dev/nbd1 00:07:26.049 08:40:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:26.049 08:40:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:26.049 08:40:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:26.049 08:40:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:26.049 08:40:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:26.049 08:40:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:26.049 08:40:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:26.049 08:40:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:26.049 08:40:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:26.049 08:40:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:26.049 08:40:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:26.049 1+0 records in 00:07:26.049 1+0 records out 00:07:26.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346149 s, 11.8 MB/s 00:07:26.049 08:40:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:26.049 08:40:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:26.049 08:40:56 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:26.049 08:40:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:26.049 08:40:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:26.049 08:40:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:26.049 08:40:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:26.049 08:40:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:26.049 08:40:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.049 08:40:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:26.617 { 00:07:26.617 "nbd_device": "/dev/nbd0", 00:07:26.617 "bdev_name": "Malloc0" 00:07:26.617 }, 00:07:26.617 { 00:07:26.617 "nbd_device": "/dev/nbd1", 00:07:26.617 "bdev_name": "Malloc1" 00:07:26.617 } 00:07:26.617 ]' 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:26.617 { 00:07:26.617 "nbd_device": "/dev/nbd0", 00:07:26.617 "bdev_name": "Malloc0" 00:07:26.617 }, 00:07:26.617 { 00:07:26.617 "nbd_device": "/dev/nbd1", 00:07:26.617 "bdev_name": "Malloc1" 00:07:26.617 } 00:07:26.617 ]' 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:26.617 /dev/nbd1' 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:26.617 /dev/nbd1' 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:26.617 
08:40:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:26.617 256+0 records in 00:07:26.617 256+0 records out 00:07:26.617 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0074407 s, 141 MB/s 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:26.617 256+0 records in 00:07:26.617 256+0 records out 00:07:26.617 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258992 s, 40.5 MB/s 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:26.617 256+0 records in 00:07:26.617 256+0 records out 00:07:26.617 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0349727 s, 30.0 MB/s 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.617 08:40:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:26.875 08:40:57 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:26.875 08:40:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:26.875 08:40:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:26.875 08:40:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.875 08:40:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.875 08:40:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:26.875 08:40:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:26.875 08:40:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.875 08:40:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.875 08:40:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:27.134 08:40:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:27.134 08:40:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:27.134 08:40:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:27.134 08:40:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.134 08:40:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.134 08:40:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:27.134 08:40:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:27.134 08:40:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.134 08:40:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:27.134 08:40:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.134 08:40:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:27.702 08:40:58 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:27.702 08:40:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:27.702 08:40:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:27.702 08:40:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:27.702 08:40:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:27.702 08:40:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:27.702 08:40:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:27.702 08:40:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:27.702 08:40:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:27.702 08:40:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:27.702 08:40:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:27.702 08:40:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:27.702 08:40:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:28.274 08:40:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:29.207 [2024-11-20 08:40:59.957462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:29.207 [2024-11-20 08:41:00.085300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.207 [2024-11-20 08:41:00.085303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.466 [2024-11-20 08:41:00.274684] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:29.466 [2024-11-20 08:41:00.274780] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
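Throughout the trace, `waitfornbd` and `waitfornbd_exit` use the same bounded polling shape: `grep -q -w nbdX /proc/partitions` up to 20 times, with `break` on success and `return 0` either way once the condition settles. A small Python sketch of that loop under a generic predicate (the `wait_for` helper is an illustration, not an SPDK function):

```python
import time

MAX_RETRIES = 20  # matches the (( i <= 20 )) bound in waitfornbd


def wait_for(predicate, expected, max_retries=MAX_RETRIES, delay=0.0):
    """Poll until predicate() == expected, or give up after max_retries.

    waitfornbd polls with expected=True (the device appears in
    /proc/partitions after nbd_start_disk); waitfornbd_exit polls with
    expected=False (the device disappears again after nbd_stop_disk).
    """
    for _ in range(max_retries):
        if predicate() == expected:
            return True  # the shell version does 'break' then 'return 0'
        time.sleep(delay)
    return False
```

Bounding the retries keeps a hung nbd attach from stalling the whole autotest run, which is why both helpers share the hard cap of 20 probes.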
00:07:31.369 08:41:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:31.369 spdk_app_start Round 2 00:07:31.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:31.369 08:41:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:31.369 08:41:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58282 /var/tmp/spdk-nbd.sock 00:07:31.369 08:41:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58282 ']' 00:07:31.369 08:41:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:31.369 08:41:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.369 08:41:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:31.369 08:41:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.369 08:41:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:31.627 08:41:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.627 08:41:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:31.627 08:41:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:31.885 Malloc0 00:07:31.885 08:41:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:32.144 Malloc1 00:07:32.144 08:41:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:32.144 08:41:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.144 08:41:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:32.144 
08:41:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:32.144 08:41:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.144 08:41:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:32.144 08:41:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:32.144 08:41:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.144 08:41:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:32.144 08:41:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:32.144 08:41:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.144 08:41:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:32.144 08:41:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:32.144 08:41:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:32.144 08:41:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:32.144 08:41:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:32.711 /dev/nbd0 00:07:32.711 08:41:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:32.711 08:41:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:32.711 08:41:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:32.711 08:41:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:32.711 08:41:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:32.711 08:41:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:32.711 08:41:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:32.711 08:41:03 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:32.711 08:41:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:32.711 08:41:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:32.711 08:41:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:32.711 1+0 records in 00:07:32.711 1+0 records out 00:07:32.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035677 s, 11.5 MB/s 00:07:32.711 08:41:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:32.711 08:41:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:32.711 08:41:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:32.711 08:41:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:32.711 08:41:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:32.711 08:41:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:32.711 08:41:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:32.711 08:41:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:32.969 /dev/nbd1 00:07:32.969 08:41:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:32.969 08:41:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:32.969 08:41:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:32.969 08:41:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:32.969 08:41:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:32.969 08:41:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:32.969 08:41:03 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:32.969 08:41:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:32.969 08:41:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:32.969 08:41:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:32.969 08:41:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:32.969 1+0 records in 00:07:32.969 1+0 records out 00:07:32.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294633 s, 13.9 MB/s 00:07:32.969 08:41:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:32.969 08:41:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:32.969 08:41:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:32.969 08:41:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:32.969 08:41:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:32.969 08:41:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:32.969 08:41:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:32.969 08:41:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:32.969 08:41:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.969 08:41:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:33.228 08:41:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:33.228 { 00:07:33.228 "nbd_device": "/dev/nbd0", 00:07:33.228 "bdev_name": "Malloc0" 00:07:33.228 }, 00:07:33.228 { 00:07:33.228 "nbd_device": "/dev/nbd1", 00:07:33.228 "bdev_name": 
"Malloc1" 00:07:33.228 } 00:07:33.228 ]' 00:07:33.228 08:41:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:33.228 08:41:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:33.228 { 00:07:33.228 "nbd_device": "/dev/nbd0", 00:07:33.228 "bdev_name": "Malloc0" 00:07:33.228 }, 00:07:33.228 { 00:07:33.228 "nbd_device": "/dev/nbd1", 00:07:33.228 "bdev_name": "Malloc1" 00:07:33.228 } 00:07:33.228 ]' 00:07:33.228 08:41:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:33.228 /dev/nbd1' 00:07:33.228 08:41:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:33.228 /dev/nbd1' 00:07:33.228 08:41:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:33.228 08:41:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:33.228 08:41:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:33.228 08:41:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:33.228 08:41:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:33.228 08:41:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:33.228 08:41:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:33.487 256+0 records in 00:07:33.487 256+0 records out 00:07:33.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106782 s, 98.2 MB/s 
00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:33.487 256+0 records in 00:07:33.487 256+0 records out 00:07:33.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291132 s, 36.0 MB/s 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:33.487 256+0 records in 00:07:33.487 256+0 records out 00:07:33.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280032 s, 37.4 MB/s 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:33.487 08:41:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:33.746 08:41:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:33.746 08:41:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:33.746 08:41:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:33.746 08:41:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:33.746 08:41:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:33.746 08:41:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:33.746 08:41:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:33.747 08:41:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:33.747 08:41:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:33.747 08:41:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:34.006 08:41:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:34.006 08:41:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:07:34.006 08:41:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:34.006 08:41:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:34.006 08:41:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:34.006 08:41:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:34.006 08:41:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:34.006 08:41:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:34.006 08:41:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:34.006 08:41:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.006 08:41:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:34.573 08:41:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:34.573 08:41:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:34.573 08:41:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:34.573 08:41:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:34.573 08:41:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:34.573 08:41:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:34.573 08:41:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:34.573 08:41:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:34.573 08:41:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:34.573 08:41:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:34.573 08:41:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:34.573 08:41:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:34.573 08:41:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:34.832 08:41:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:36.207 [2024-11-20 08:41:06.759824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:36.207 [2024-11-20 08:41:06.886935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.207 [2024-11-20 08:41:06.886945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.207 [2024-11-20 08:41:07.076759] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:36.207 [2024-11-20 08:41:07.076854] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:38.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:38.109 08:41:08 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58282 /var/tmp/spdk-nbd.sock 00:07:38.109 08:41:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58282 ']' 00:07:38.109 08:41:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:38.109 08:41:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.109 08:41:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:38.109 08:41:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.109 08:41:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:38.368 08:41:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.368 08:41:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:38.368 08:41:09 event.app_repeat -- event/event.sh@39 -- # killprocess 58282 00:07:38.368 08:41:09 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58282 ']' 00:07:38.368 08:41:09 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58282 00:07:38.368 08:41:09 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:38.368 08:41:09 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.368 08:41:09 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58282 00:07:38.368 killing process with pid 58282 00:07:38.368 08:41:09 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:38.368 08:41:09 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:38.368 08:41:09 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58282' 00:07:38.368 08:41:09 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58282 00:07:38.368 08:41:09 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58282 00:07:39.303 spdk_app_start is called in Round 0. 00:07:39.303 Shutdown signal received, stop current app iteration 00:07:39.303 Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 reinitialization... 00:07:39.303 spdk_app_start is called in Round 1. 00:07:39.303 Shutdown signal received, stop current app iteration 00:07:39.303 Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 reinitialization... 00:07:39.303 spdk_app_start is called in Round 2. 
00:07:39.303 Shutdown signal received, stop current app iteration 00:07:39.303 Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 reinitialization... 00:07:39.303 spdk_app_start is called in Round 3. 00:07:39.303 Shutdown signal received, stop current app iteration 00:07:39.303 08:41:10 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:39.303 08:41:10 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:39.303 00:07:39.303 real 0m21.990s 00:07:39.303 user 0m48.986s 00:07:39.303 sys 0m3.177s 00:07:39.303 08:41:10 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.303 ************************************ 00:07:39.303 END TEST app_repeat 00:07:39.303 ************************************ 00:07:39.303 08:41:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:39.303 08:41:10 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:39.303 08:41:10 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:39.303 08:41:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.303 08:41:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.303 08:41:10 event -- common/autotest_common.sh@10 -- # set +x 00:07:39.303 ************************************ 00:07:39.303 START TEST cpu_locks 00:07:39.303 ************************************ 00:07:39.303 08:41:10 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:39.303 * Looking for test storage... 
00:07:39.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:39.303 08:41:10 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:39.303 08:41:10 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:39.303 08:41:10 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:39.303 08:41:10 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:39.303 08:41:10 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.303 08:41:10 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.303 08:41:10 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.303 08:41:10 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.303 08:41:10 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.303 08:41:10 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.562 08:41:10 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.562 08:41:10 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.562 08:41:10 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.562 08:41:10 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.562 08:41:10 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.562 08:41:10 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:39.562 08:41:10 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:39.562 08:41:10 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.562 08:41:10 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:39.562 08:41:10 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:39.562 08:41:10 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:39.562 08:41:10 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.562 08:41:10 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:39.562 08:41:10 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.562 08:41:10 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:39.562 08:41:10 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:39.562 08:41:10 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.562 08:41:10 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:39.562 08:41:10 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.562 08:41:10 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.562 08:41:10 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.562 08:41:10 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:39.562 08:41:10 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.562 08:41:10 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:39.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.562 --rc genhtml_branch_coverage=1 00:07:39.562 --rc genhtml_function_coverage=1 00:07:39.562 --rc genhtml_legend=1 00:07:39.562 --rc geninfo_all_blocks=1 00:07:39.562 --rc geninfo_unexecuted_blocks=1 00:07:39.562 00:07:39.562 ' 00:07:39.562 08:41:10 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:39.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.562 --rc genhtml_branch_coverage=1 00:07:39.562 --rc genhtml_function_coverage=1 00:07:39.562 --rc genhtml_legend=1 00:07:39.562 --rc geninfo_all_blocks=1 00:07:39.562 --rc geninfo_unexecuted_blocks=1 
00:07:39.562 00:07:39.562 ' 00:07:39.562 08:41:10 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:39.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.562 --rc genhtml_branch_coverage=1 00:07:39.562 --rc genhtml_function_coverage=1 00:07:39.562 --rc genhtml_legend=1 00:07:39.562 --rc geninfo_all_blocks=1 00:07:39.562 --rc geninfo_unexecuted_blocks=1 00:07:39.562 00:07:39.562 ' 00:07:39.562 08:41:10 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:39.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.562 --rc genhtml_branch_coverage=1 00:07:39.562 --rc genhtml_function_coverage=1 00:07:39.562 --rc genhtml_legend=1 00:07:39.562 --rc geninfo_all_blocks=1 00:07:39.562 --rc geninfo_unexecuted_blocks=1 00:07:39.562 00:07:39.562 ' 00:07:39.562 08:41:10 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:39.562 08:41:10 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:39.562 08:41:10 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:39.562 08:41:10 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:39.562 08:41:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.562 08:41:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.562 08:41:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.562 ************************************ 00:07:39.562 START TEST default_locks 00:07:39.562 ************************************ 00:07:39.562 08:41:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:39.562 08:41:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58757 00:07:39.562 08:41:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58757 00:07:39.562 08:41:10 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:39.562 08:41:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58757 ']' 00:07:39.562 08:41:10 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.562 08:41:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.562 08:41:10 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.562 08:41:10 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.562 08:41:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.562 [2024-11-20 08:41:10.377418] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:39.563 [2024-11-20 08:41:10.377612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58757 ] 00:07:39.821 [2024-11-20 08:41:10.561540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.821 [2024-11-20 08:41:10.696665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.757 08:41:11 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.757 08:41:11 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:40.757 08:41:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58757 00:07:40.757 08:41:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58757 00:07:40.757 08:41:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:41.323 08:41:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58757 00:07:41.323 08:41:11 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58757 ']' 00:07:41.323 08:41:11 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58757 00:07:41.323 08:41:11 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:41.323 08:41:11 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.323 08:41:11 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58757 00:07:41.323 08:41:12 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.323 killing process with pid 58757 00:07:41.323 08:41:12 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.323 08:41:12 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58757' 00:07:41.323 08:41:12 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58757 00:07:41.323 08:41:12 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58757 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58757 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58757 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58757 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58757 ']' 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:43.880 08:41:14 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.880 ERROR: process (pid: 58757) is no longer running 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:43.880 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58757) - No such process 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:43.880 00:07:43.880 real 0m4.024s 00:07:43.880 user 0m3.995s 00:07:43.880 sys 0m0.749s 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.880 08:41:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:43.880 ************************************ 00:07:43.880 END TEST default_locks 00:07:43.880 ************************************ 00:07:43.880 08:41:14 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:43.880 08:41:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:07:43.880 08:41:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.880 08:41:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:43.880 ************************************ 00:07:43.880 START TEST default_locks_via_rpc 00:07:43.880 ************************************ 00:07:43.880 08:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:43.880 08:41:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58834 00:07:43.880 08:41:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:43.880 08:41:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58834 00:07:43.880 08:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58834 ']' 00:07:43.880 08:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.880 08:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.880 08:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.880 08:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.880 08:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.880 [2024-11-20 08:41:14.454839] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:07:43.880 [2024-11-20 08:41:14.455000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58834 ] 00:07:43.880 [2024-11-20 08:41:14.635700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.880 [2024-11-20 08:41:14.791127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.815 08:41:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.815 08:41:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:44.815 08:41:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:44.815 08:41:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.815 08:41:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.815 08:41:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.815 08:41:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:44.815 08:41:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:44.815 08:41:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:44.815 08:41:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:44.815 08:41:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:44.815 08:41:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.815 08:41:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.815 08:41:15 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.815 08:41:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58834 00:07:44.815 08:41:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58834 00:07:44.815 08:41:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:45.381 08:41:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58834 00:07:45.381 08:41:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58834 ']' 00:07:45.381 08:41:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58834 00:07:45.381 08:41:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:45.381 08:41:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.381 08:41:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58834 00:07:45.381 08:41:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:45.381 killing process with pid 58834 00:07:45.381 08:41:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:45.381 08:41:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58834' 00:07:45.381 08:41:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58834 00:07:45.381 08:41:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58834 00:07:47.945 00:07:47.945 real 0m3.986s 00:07:47.945 user 0m4.088s 00:07:47.945 sys 0m0.730s 00:07:47.945 08:41:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.945 
************************************ 00:07:47.945 END TEST default_locks_via_rpc 00:07:47.945 ************************************ 00:07:47.945 08:41:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.945 08:41:18 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:47.945 08:41:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.945 08:41:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.945 08:41:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:47.945 ************************************ 00:07:47.945 START TEST non_locking_app_on_locked_coremask 00:07:47.945 ************************************ 00:07:47.945 08:41:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:47.945 08:41:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58908 00:07:47.945 08:41:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:47.945 08:41:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58908 /var/tmp/spdk.sock 00:07:47.945 08:41:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58908 ']' 00:07:47.945 08:41:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:47.945 08:41:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.945 08:41:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.945 08:41:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.945 08:41:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.945 [2024-11-20 08:41:18.486868] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:07:47.945 [2024-11-20 08:41:18.487068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58908 ] 00:07:47.945 [2024-11-20 08:41:18.669793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.945 [2024-11-20 08:41:18.800069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.879 08:41:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.879 08:41:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:48.879 08:41:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58924 00:07:48.879 08:41:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58924 /var/tmp/spdk2.sock 00:07:48.879 08:41:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58924 ']' 00:07:48.879 08:41:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:48.879 08:41:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:48.879 08:41:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:48.879 08:41:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:48.879 08:41:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.879 08:41:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:48.879 [2024-11-20 08:41:19.779774] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:07:48.879 [2024-11-20 08:41:19.780007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58924 ] 00:07:49.137 [2024-11-20 08:41:19.984726] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:49.137 [2024-11-20 08:41:19.984816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.395 [2024-11-20 08:41:20.242505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.949 08:41:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.949 08:41:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:51.949 08:41:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58908 00:07:51.949 08:41:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58908 00:07:51.949 08:41:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:52.518 08:41:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58908 00:07:52.519 08:41:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58908 ']' 00:07:52.519 08:41:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58908 00:07:52.519 08:41:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:52.519 08:41:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.519 08:41:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58908 00:07:52.519 killing process with pid 58908 00:07:52.519 08:41:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.519 08:41:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.519 08:41:23 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58908' 00:07:52.519 08:41:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58908 00:07:52.519 08:41:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58908 00:07:57.784 08:41:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58924 00:07:57.784 08:41:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58924 ']' 00:07:57.784 08:41:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58924 00:07:57.784 08:41:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:57.784 08:41:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:57.784 08:41:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58924 00:07:57.784 killing process with pid 58924 00:07:57.784 08:41:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:57.784 08:41:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:57.784 08:41:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58924' 00:07:57.784 08:41:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58924 00:07:57.784 08:41:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58924 00:07:59.159 ************************************ 00:07:59.159 END TEST non_locking_app_on_locked_coremask 00:07:59.159 ************************************ 00:07:59.159 00:07:59.159 real 0m11.703s 
00:07:59.159 user 0m12.409s 00:07:59.159 sys 0m1.485s 00:07:59.159 08:41:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.159 08:41:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:59.417 08:41:30 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:59.417 08:41:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.417 08:41:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.417 08:41:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:59.417 ************************************ 00:07:59.417 START TEST locking_app_on_unlocked_coremask 00:07:59.417 ************************************ 00:07:59.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.417 08:41:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:59.417 08:41:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59077 00:07:59.417 08:41:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:59.417 08:41:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59077 /var/tmp/spdk.sock 00:07:59.417 08:41:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59077 ']' 00:07:59.417 08:41:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.417 08:41:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.417 08:41:30 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.417 08:41:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.417 08:41:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:59.417 [2024-11-20 08:41:30.218594] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:07:59.417 [2024-11-20 08:41:30.219004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59077 ] 00:07:59.675 [2024-11-20 08:41:30.390582] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:59.675 [2024-11-20 08:41:30.390804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.675 [2024-11-20 08:41:30.520013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:00.609 08:41:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.609 08:41:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:00.609 08:41:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:00.609 08:41:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59099 00:08:00.609 08:41:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59099 /var/tmp/spdk2.sock 00:08:00.609 08:41:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59099 ']' 00:08:00.609 08:41:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:00.609 08:41:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.609 08:41:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:00.609 08:41:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.609 08:41:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:00.609 [2024-11-20 08:41:31.510470] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:00.609 [2024-11-20 08:41:31.510956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59099 ] 00:08:00.923 [2024-11-20 08:41:31.723118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.181 [2024-11-20 08:41:31.986812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.713 08:41:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.713 08:41:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:03.713 08:41:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59099 00:08:03.713 08:41:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59099 00:08:03.713 08:41:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:04.280 08:41:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59077 00:08:04.280 08:41:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59077 ']' 00:08:04.280 08:41:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59077 00:08:04.280 08:41:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:04.280 08:41:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.280 08:41:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59077 00:08:04.280 08:41:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:08:04.280 08:41:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.280 killing process with pid 59077 00:08:04.280 08:41:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59077' 00:08:04.280 08:41:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59077 00:08:04.280 08:41:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59077 00:08:09.608 08:41:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59099 00:08:09.608 08:41:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59099 ']' 00:08:09.608 08:41:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59099 00:08:09.608 08:41:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:09.608 08:41:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.608 08:41:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59099 00:08:09.608 killing process with pid 59099 00:08:09.608 08:41:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.608 08:41:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.608 08:41:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59099' 00:08:09.608 08:41:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59099 00:08:09.608 08:41:39 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 59099 00:08:10.984 ************************************ 00:08:10.984 END TEST locking_app_on_unlocked_coremask 00:08:10.984 ************************************ 00:08:10.984 00:08:10.984 real 0m11.714s 00:08:10.984 user 0m12.371s 00:08:10.984 sys 0m1.440s 00:08:10.984 08:41:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.984 08:41:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.984 08:41:41 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:10.984 08:41:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.984 08:41:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.984 08:41:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:10.984 ************************************ 00:08:10.984 START TEST locking_app_on_locked_coremask 00:08:10.984 ************************************ 00:08:10.984 08:41:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:08:10.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:10.984 08:41:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59247 00:08:10.985 08:41:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59247 /var/tmp/spdk.sock 00:08:10.985 08:41:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:10.985 08:41:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59247 ']' 00:08:10.985 08:41:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.985 08:41:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.985 08:41:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.985 08:41:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.985 08:41:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:11.245 [2024-11-20 08:41:41.997457] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:11.245 [2024-11-20 08:41:41.997641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59247 ] 00:08:11.504 [2024-11-20 08:41:42.188182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.504 [2024-11-20 08:41:42.348306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.466 08:41:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.466 08:41:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:12.466 08:41:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59268 00:08:12.466 08:41:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:12.466 08:41:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59268 /var/tmp/spdk2.sock 00:08:12.466 08:41:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:12.466 08:41:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59268 /var/tmp/spdk2.sock 00:08:12.466 08:41:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:12.466 08:41:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:12.466 08:41:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:12.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:12.466 08:41:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:12.466 08:41:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59268 /var/tmp/spdk2.sock 00:08:12.466 08:41:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59268 ']' 00:08:12.466 08:41:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:12.466 08:41:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.466 08:41:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:12.466 08:41:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.466 08:41:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:12.466 [2024-11-20 08:41:43.346162] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:12.466 [2024-11-20 08:41:43.346570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59268 ] 00:08:12.726 [2024-11-20 08:41:43.542750] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59247 has claimed it. 00:08:12.726 [2024-11-20 08:41:43.542858] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:08:13.294 ERROR: process (pid: 59268) is no longer running 00:08:13.294 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59268) - No such process 00:08:13.294 08:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.294 08:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:13.294 08:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:13.294 08:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:13.294 08:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:13.294 08:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:13.294 08:41:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59247 00:08:13.294 08:41:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59247 00:08:13.294 08:41:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:13.862 08:41:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59247 00:08:13.862 08:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59247 ']' 00:08:13.862 08:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59247 00:08:13.862 08:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:13.862 08:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.862 08:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59247 00:08:13.862 
killing process with pid 59247 00:08:13.862 08:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:13.862 08:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:13.862 08:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59247' 00:08:13.862 08:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59247 00:08:13.862 08:41:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59247 00:08:16.395 ************************************ 00:08:16.395 END TEST locking_app_on_locked_coremask 00:08:16.395 ************************************ 00:08:16.395 00:08:16.395 real 0m4.865s 00:08:16.395 user 0m5.167s 00:08:16.395 sys 0m0.886s 00:08:16.395 08:41:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.395 08:41:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:16.395 08:41:46 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:16.395 08:41:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.395 08:41:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.395 08:41:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:16.395 ************************************ 00:08:16.395 START TEST locking_overlapped_coremask 00:08:16.395 ************************************ 00:08:16.395 08:41:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:08:16.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:16.395 08:41:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59338 00:08:16.395 08:41:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59338 /var/tmp/spdk.sock 00:08:16.395 08:41:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:16.395 08:41:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59338 ']' 00:08:16.395 08:41:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.395 08:41:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.395 08:41:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.395 08:41:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.395 08:41:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:16.395 [2024-11-20 08:41:46.908847] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:16.395 [2024-11-20 08:41:46.909280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59338 ] 00:08:16.395 [2024-11-20 08:41:47.097911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:16.395 [2024-11-20 08:41:47.252388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.395 [2024-11-20 08:41:47.252455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.395 [2024-11-20 08:41:47.252471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.331 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.331 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:17.331 08:41:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59356 00:08:17.331 08:41:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59356 /var/tmp/spdk2.sock 00:08:17.331 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:17.331 08:41:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:17.331 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59356 /var/tmp/spdk2.sock 00:08:17.331 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:17.331 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:17.331 08:41:48 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:17.331 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:17.331 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59356 /var/tmp/spdk2.sock 00:08:17.331 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59356 ']' 00:08:17.331 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:17.331 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.331 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:17.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:17.331 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.331 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:17.590 [2024-11-20 08:41:48.250405] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:17.590 [2024-11-20 08:41:48.250580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59356 ] 00:08:17.590 [2024-11-20 08:41:48.457685] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59338 has claimed it. 00:08:17.590 [2024-11-20 08:41:48.457791] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
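The "Cannot create lock on core 2, probably process 59338 has claimed it" error above comes from SPDK's per-core lock files (the `/var/tmp/spdk_cpu_lock_NNN` paths checked by `check_remaining_locks` later in this log). As a rough sketch only — the file naming follows the log, but the `flock`-based mechanism and the demo path are illustrative assumptions, not SPDK's exact implementation:

```shell
#!/usr/bin/env bash
# Sketch: claim a CPU core by taking an exclusive, non-blocking flock on a
# per-core lock file, mirroring the spdk_cpu_lock_NNN files seen in the log.
# Illustrative only: demo path and flock mechanism are assumptions.
claim_core() {
    local core=$1
    local lockfile
    lockfile=$(printf '/tmp/demo_cpu_lock_%03d' "$core")  # demo path, not /var/tmp
    exec {fd}>"$lockfile"                                 # open and keep an fd
    if ! flock -n "$fd"; then
        echo "Cannot create lock on core $core, another process has claimed it" >&2
        return 1
    fi
    echo "claimed core $core"
}

claim_core 2
```

A second process running the same function against the same lock file would hit the `flock -n` failure branch, which is the situation the two overlapping `spdk_tgt` instances above reproduce deliberately.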
00:08:18.157 ERROR: process (pid: 59356) is no longer running 00:08:18.157 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59356) - No such process 00:08:18.157 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.157 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:18.157 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:18.157 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:18.157 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:18.157 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:18.157 08:41:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:18.157 08:41:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:18.157 08:41:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:18.157 08:41:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:18.157 08:41:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59338 00:08:18.157 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59338 ']' 00:08:18.157 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59338 00:08:18.157 08:41:48 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:08:18.157 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.157 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59338 00:08:18.157 killing process with pid 59338 00:08:18.157 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.157 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.157 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59338' 00:08:18.157 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59338 00:08:18.157 08:41:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59338 00:08:20.708 ************************************ 00:08:20.708 END TEST locking_overlapped_coremask 00:08:20.708 ************************************ 00:08:20.708 00:08:20.708 real 0m4.396s 00:08:20.708 user 0m11.946s 00:08:20.708 sys 0m0.699s 00:08:20.708 08:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.708 08:41:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:20.708 08:41:51 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:20.708 08:41:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:20.708 08:41:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.708 08:41:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:20.708 ************************************ 00:08:20.708 START TEST 
locking_overlapped_coremask_via_rpc 00:08:20.708 ************************************ 00:08:20.708 08:41:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:08:20.708 08:41:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59420 00:08:20.708 08:41:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59420 /var/tmp/spdk.sock 00:08:20.708 08:41:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:20.708 08:41:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59420 ']' 00:08:20.709 08:41:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.709 08:41:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.709 08:41:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.709 08:41:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.709 08:41:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.709 [2024-11-20 08:41:51.335003] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:20.709 [2024-11-20 08:41:51.335180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59420 ] 00:08:20.709 [2024-11-20 08:41:51.514668] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:20.709 [2024-11-20 08:41:51.514749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:20.967 [2024-11-20 08:41:51.677432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.968 [2024-11-20 08:41:51.677586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.968 [2024-11-20 08:41:51.677595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.903 08:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.903 08:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:21.904 08:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59443 00:08:21.904 08:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59443 /var/tmp/spdk2.sock 00:08:21.904 08:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59443 ']' 00:08:21.904 08:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:21.904 08:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:21.904 08:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.904 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:21.904 08:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:21.904 08:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.904 08:41:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.904 [2024-11-20 08:41:52.683397] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:21.904 [2024-11-20 08:41:52.683601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59443 ] 00:08:22.162 [2024-11-20 08:41:52.886587] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:22.162 [2024-11-20 08:41:52.890230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:22.420 [2024-11-20 08:41:53.160778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.420 [2024-11-20 08:41:53.160901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:22.420 [2024-11-20 08:41:53.161091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.002 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.002 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:25.002 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:25.002 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.002 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.002 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.002 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:25.002 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:25.002 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:25.002 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:25.002 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.002 08:41:55 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:25.002 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.002 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:25.002 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.002 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.002 [2024-11-20 08:41:55.533381] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59420 has claimed it. 00:08:25.002 request: 00:08:25.002 { 00:08:25.002 "method": "framework_enable_cpumask_locks", 00:08:25.002 "req_id": 1 00:08:25.002 } 00:08:25.002 Got JSON-RPC error response 00:08:25.002 response: 00:08:25.002 { 00:08:25.002 "code": -32603, 00:08:25.002 "message": "Failed to claim CPU core: 2" 00:08:25.002 } 00:08:25.002 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:25.002 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:25.003 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:25.003 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:25.003 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:25.003 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59420 /var/tmp/spdk.sock 00:08:25.003 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59420 ']' 00:08:25.003 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.003 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.003 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.003 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.003 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.003 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.003 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:25.003 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59443 /var/tmp/spdk2.sock 00:08:25.003 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59443 ']' 00:08:25.003 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:25.003 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:25.003 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:08:25.003 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.003 08:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.569 08:41:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.569 08:41:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:25.569 08:41:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:25.569 08:41:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:25.569 08:41:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:25.569 08:41:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:25.569 00:08:25.569 real 0m5.008s 00:08:25.569 user 0m1.941s 00:08:25.569 sys 0m0.240s 00:08:25.569 08:41:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.569 08:41:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.569 ************************************ 00:08:25.569 END TEST locking_overlapped_coremask_via_rpc 00:08:25.569 ************************************ 00:08:25.569 08:41:56 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:25.569 08:41:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59420 ]] 00:08:25.569 08:41:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59420 00:08:25.569 08:41:56 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59420 ']' 00:08:25.569 08:41:56 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59420 00:08:25.569 08:41:56 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:25.569 08:41:56 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.569 08:41:56 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59420 00:08:25.569 08:41:56 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:25.569 08:41:56 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:25.569 killing process with pid 59420 00:08:25.569 08:41:56 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59420' 00:08:25.569 08:41:56 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59420 00:08:25.569 08:41:56 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59420 00:08:28.100 08:41:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59443 ]] 00:08:28.100 08:41:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59443 00:08:28.100 08:41:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59443 ']' 00:08:28.100 08:41:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59443 00:08:28.100 08:41:58 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:28.100 08:41:58 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.100 08:41:58 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59443 00:08:28.100 08:41:58 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:28.100 killing process with pid 59443 00:08:28.100 08:41:58 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:28.100 08:41:58 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59443' 00:08:28.100 08:41:58 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59443 00:08:28.100 08:41:58 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59443 00:08:30.001 08:42:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:30.001 08:42:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:30.001 08:42:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59420 ]] 00:08:30.001 08:42:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59420 00:08:30.001 08:42:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59420 ']' 00:08:30.001 08:42:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59420 00:08:30.001 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59420) - No such process 00:08:30.001 Process with pid 59420 is not found 00:08:30.001 08:42:00 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59420 is not found' 00:08:30.001 08:42:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59443 ]] 00:08:30.001 08:42:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59443 00:08:30.001 08:42:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59443 ']' 00:08:30.001 08:42:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59443 00:08:30.001 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59443) - No such process 00:08:30.001 Process with pid 59443 is not found 00:08:30.001 08:42:00 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59443 is not found' 00:08:30.001 08:42:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:30.001 00:08:30.001 real 0m50.762s 00:08:30.001 user 1m28.819s 00:08:30.001 sys 0m7.443s 00:08:30.001 08:42:00 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.001 ************************************ 00:08:30.001 END TEST cpu_locks 00:08:30.001 
************************************ 00:08:30.001 08:42:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:30.001 00:08:30.001 real 1m23.847s 00:08:30.001 user 2m35.494s 00:08:30.001 sys 0m11.707s 00:08:30.001 08:42:00 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.001 08:42:00 event -- common/autotest_common.sh@10 -- # set +x 00:08:30.001 ************************************ 00:08:30.001 END TEST event 00:08:30.001 ************************************ 00:08:30.001 08:42:00 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:30.001 08:42:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.001 08:42:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.001 08:42:00 -- common/autotest_common.sh@10 -- # set +x 00:08:30.001 ************************************ 00:08:30.001 START TEST thread 00:08:30.001 ************************************ 00:08:30.001 08:42:00 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:30.260 * Looking for test storage... 
00:08:30.260 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:30.260 08:42:00 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:30.260 08:42:00 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:08:30.260 08:42:00 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:30.260 08:42:01 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:30.260 08:42:01 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.260 08:42:01 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.260 08:42:01 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.260 08:42:01 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.260 08:42:01 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.260 08:42:01 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.260 08:42:01 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.260 08:42:01 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.260 08:42:01 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.260 08:42:01 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.260 08:42:01 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.260 08:42:01 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:30.260 08:42:01 thread -- scripts/common.sh@345 -- # : 1 00:08:30.260 08:42:01 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.260 08:42:01 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.260 08:42:01 thread -- scripts/common.sh@365 -- # decimal 1 00:08:30.260 08:42:01 thread -- scripts/common.sh@353 -- # local d=1 00:08:30.260 08:42:01 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.260 08:42:01 thread -- scripts/common.sh@355 -- # echo 1 00:08:30.260 08:42:01 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.260 08:42:01 thread -- scripts/common.sh@366 -- # decimal 2 00:08:30.260 08:42:01 thread -- scripts/common.sh@353 -- # local d=2 00:08:30.260 08:42:01 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.260 08:42:01 thread -- scripts/common.sh@355 -- # echo 2 00:08:30.260 08:42:01 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.260 08:42:01 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.260 08:42:01 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.260 08:42:01 thread -- scripts/common.sh@368 -- # return 0 00:08:30.260 08:42:01 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.260 08:42:01 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:30.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.260 --rc genhtml_branch_coverage=1 00:08:30.260 --rc genhtml_function_coverage=1 00:08:30.260 --rc genhtml_legend=1 00:08:30.260 --rc geninfo_all_blocks=1 00:08:30.260 --rc geninfo_unexecuted_blocks=1 00:08:30.260 00:08:30.260 ' 00:08:30.260 08:42:01 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:30.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.260 --rc genhtml_branch_coverage=1 00:08:30.260 --rc genhtml_function_coverage=1 00:08:30.260 --rc genhtml_legend=1 00:08:30.260 --rc geninfo_all_blocks=1 00:08:30.260 --rc geninfo_unexecuted_blocks=1 00:08:30.260 00:08:30.260 ' 00:08:30.260 08:42:01 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:30.260 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.260 --rc genhtml_branch_coverage=1 00:08:30.260 --rc genhtml_function_coverage=1 00:08:30.260 --rc genhtml_legend=1 00:08:30.260 --rc geninfo_all_blocks=1 00:08:30.260 --rc geninfo_unexecuted_blocks=1 00:08:30.260 00:08:30.260 ' 00:08:30.260 08:42:01 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:30.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.260 --rc genhtml_branch_coverage=1 00:08:30.260 --rc genhtml_function_coverage=1 00:08:30.260 --rc genhtml_legend=1 00:08:30.260 --rc geninfo_all_blocks=1 00:08:30.260 --rc geninfo_unexecuted_blocks=1 00:08:30.260 00:08:30.260 ' 00:08:30.260 08:42:01 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:30.260 08:42:01 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:30.260 08:42:01 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.260 08:42:01 thread -- common/autotest_common.sh@10 -- # set +x 00:08:30.260 ************************************ 00:08:30.260 START TEST thread_poller_perf 00:08:30.260 ************************************ 00:08:30.260 08:42:01 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:30.260 [2024-11-20 08:42:01.133916] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
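The `lt 1.15 2` trace from `scripts/common.sh` above walks a component-wise version comparison: both versions are split on `.`, `-`, and `:`, then compared numerically field by field. A simplified reconstruction (not the verbatim helper; missing fields default to 0 here as an assumption):

```shell
# Sketch of the component-wise version comparison traced from
# scripts/common.sh (lt 1.15 2). Simplified reconstruction.
version_lt() {
    local IFS=.-:                 # split fields on '.', '-' and ':'
    read -ra v1 <<<"$1"
    read -ra v2 <<<"$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                      # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

The first field already decides this case (1 < 2), which is why the traced run returns 0 and the lcov-dependent branch is taken.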
00:08:30.260 [2024-11-20 08:42:01.134365] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59644 ] 00:08:30.519 [2024-11-20 08:42:01.323877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.840 [2024-11-20 08:42:01.478265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.840 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:32.215 [2024-11-20T08:42:03.131Z] ====================================== 00:08:32.215 [2024-11-20T08:42:03.131Z] busy:2210770950 (cyc) 00:08:32.215 [2024-11-20T08:42:03.131Z] total_run_count: 297000 00:08:32.215 [2024-11-20T08:42:03.131Z] tsc_hz: 2200000000 (cyc) 00:08:32.215 [2024-11-20T08:42:03.131Z] ====================================== 00:08:32.215 [2024-11-20T08:42:03.131Z] poller_cost: 7443 (cyc), 3383 (nsec) 00:08:32.215 00:08:32.215 ************************************ 00:08:32.216 END TEST thread_poller_perf 00:08:32.216 ************************************ 00:08:32.216 real 0m1.632s 00:08:32.216 user 0m1.420s 00:08:32.216 sys 0m0.101s 00:08:32.216 08:42:02 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.216 08:42:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:32.216 08:42:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:32.216 08:42:02 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:32.216 08:42:02 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.216 08:42:02 thread -- common/autotest_common.sh@10 -- # set +x 00:08:32.216 ************************************ 00:08:32.216 START TEST thread_poller_perf 00:08:32.216 
************************************ 00:08:32.216 08:42:02 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:32.216 [2024-11-20 08:42:02.807230] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:32.216 [2024-11-20 08:42:02.807612] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59675 ] 00:08:32.216 [2024-11-20 08:42:02.988668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.216 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:32.216 [2024-11-20 08:42:03.120622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.591 [2024-11-20T08:42:04.507Z] ====================================== 00:08:33.591 [2024-11-20T08:42:04.507Z] busy:2204152795 (cyc) 00:08:33.591 [2024-11-20T08:42:04.507Z] total_run_count: 3804000 00:08:33.591 [2024-11-20T08:42:04.507Z] tsc_hz: 2200000000 (cyc) 00:08:33.591 [2024-11-20T08:42:04.507Z] ====================================== 00:08:33.591 [2024-11-20T08:42:04.507Z] poller_cost: 579 (cyc), 263 (nsec) 00:08:33.591 ************************************ 00:08:33.591 END TEST thread_poller_perf 00:08:33.591 ************************************ 00:08:33.591 00:08:33.591 real 0m1.592s 00:08:33.591 user 0m1.377s 00:08:33.591 sys 0m0.104s 00:08:33.591 08:42:04 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.591 08:42:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:33.591 08:42:04 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:33.591 00:08:33.591 real 0m3.495s 00:08:33.591 user 0m2.933s 00:08:33.591 sys 0m0.339s 00:08:33.591 08:42:04 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.591 08:42:04 thread -- common/autotest_common.sh@10 -- # set +x 00:08:33.591 ************************************ 00:08:33.591 END TEST thread 00:08:33.591 ************************************ 00:08:33.591 08:42:04 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:33.591 08:42:04 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:33.591 08:42:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.591 08:42:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.591 08:42:04 -- common/autotest_common.sh@10 -- # set +x 00:08:33.591 ************************************ 00:08:33.591 START TEST app_cmdline 00:08:33.591 ************************************ 00:08:33.591 08:42:04 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:33.849 * Looking for test storage... 00:08:33.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:33.850 08:42:04 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:33.850 08:42:04 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:33.850 08:42:04 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:08:33.850 08:42:04 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:33.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.850 08:42:04 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:33.850 08:42:04 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.850 08:42:04 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:33.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.850 --rc genhtml_branch_coverage=1 00:08:33.850 --rc genhtml_function_coverage=1 00:08:33.850 --rc genhtml_legend=1 00:08:33.850 --rc geninfo_all_blocks=1 00:08:33.850 --rc geninfo_unexecuted_blocks=1 00:08:33.850 00:08:33.850 ' 00:08:33.850 08:42:04 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:33.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.850 --rc genhtml_branch_coverage=1 00:08:33.850 --rc genhtml_function_coverage=1 00:08:33.850 --rc genhtml_legend=1 00:08:33.850 --rc geninfo_all_blocks=1 00:08:33.850 --rc geninfo_unexecuted_blocks=1 00:08:33.850 00:08:33.850 ' 00:08:33.850 08:42:04 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:33.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.850 --rc genhtml_branch_coverage=1 00:08:33.850 --rc genhtml_function_coverage=1 00:08:33.850 --rc genhtml_legend=1 00:08:33.850 --rc geninfo_all_blocks=1 00:08:33.850 --rc geninfo_unexecuted_blocks=1 00:08:33.850 00:08:33.850 ' 00:08:33.850 08:42:04 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:33.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.850 --rc genhtml_branch_coverage=1 00:08:33.850 --rc genhtml_function_coverage=1 00:08:33.850 --rc genhtml_legend=1 00:08:33.850 --rc geninfo_all_blocks=1 00:08:33.850 --rc 
geninfo_unexecuted_blocks=1 00:08:33.850 00:08:33.850 ' 00:08:33.850 08:42:04 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:33.850 08:42:04 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59764 00:08:33.850 08:42:04 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59764 00:08:33.850 08:42:04 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:33.850 08:42:04 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59764 ']' 00:08:33.850 08:42:04 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.850 08:42:04 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.850 08:42:04 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.850 08:42:04 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.850 08:42:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:33.850 [2024-11-20 08:42:04.752845] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:08:33.850 [2024-11-20 08:42:04.753295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59764 ] 00:08:34.108 [2024-11-20 08:42:04.939390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.368 [2024-11-20 08:42:05.070808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.319 08:42:05 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.319 08:42:05 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:35.319 08:42:05 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:35.578 { 00:08:35.578 "version": "SPDK v25.01-pre git sha1 6fc96a60f", 00:08:35.578 "fields": { 00:08:35.578 "major": 25, 00:08:35.578 "minor": 1, 00:08:35.578 "patch": 0, 00:08:35.578 "suffix": "-pre", 00:08:35.578 "commit": "6fc96a60f" 00:08:35.578 } 00:08:35.578 } 00:08:35.578 08:42:06 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:35.578 08:42:06 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:35.578 08:42:06 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:35.578 08:42:06 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:35.578 08:42:06 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:35.578 08:42:06 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:35.578 08:42:06 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:35.578 08:42:06 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.578 08:42:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:35.578 08:42:06 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.578 08:42:06 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:35.578 08:42:06 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:35.578 08:42:06 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:35.578 08:42:06 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:35.578 08:42:06 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:35.578 08:42:06 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:35.578 08:42:06 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.578 08:42:06 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:35.578 08:42:06 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.578 08:42:06 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:35.578 08:42:06 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.578 08:42:06 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:35.578 08:42:06 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:35.578 08:42:06 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:35.837 request: 00:08:35.837 { 00:08:35.837 "method": "env_dpdk_get_mem_stats", 00:08:35.837 "req_id": 1 00:08:35.837 } 00:08:35.837 Got JSON-RPC error response 00:08:35.837 response: 00:08:35.837 { 00:08:35.837 "code": -32601, 00:08:35.837 "message": "Method not found" 00:08:35.837 } 00:08:35.837 08:42:06 app_cmdline -- common/autotest_common.sh@655 -- # es=1 
00:08:35.837 08:42:06 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:35.837 08:42:06 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:35.837 08:42:06 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:35.837 08:42:06 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59764 00:08:35.837 08:42:06 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59764 ']' 00:08:35.837 08:42:06 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59764 00:08:35.837 08:42:06 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:35.837 08:42:06 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.837 08:42:06 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59764 00:08:35.837 killing process with pid 59764 00:08:35.837 08:42:06 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.837 08:42:06 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.837 08:42:06 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59764' 00:08:35.837 08:42:06 app_cmdline -- common/autotest_common.sh@973 -- # kill 59764 00:08:35.837 08:42:06 app_cmdline -- common/autotest_common.sh@978 -- # wait 59764 00:08:38.375 00:08:38.375 real 0m4.437s 00:08:38.375 user 0m4.971s 00:08:38.375 sys 0m0.674s 00:08:38.375 08:42:08 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.375 ************************************ 00:08:38.375 END TEST app_cmdline 00:08:38.375 ************************************ 00:08:38.375 08:42:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:38.375 08:42:08 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:38.375 08:42:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:38.375 08:42:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.375 08:42:08 -- 
common/autotest_common.sh@10 -- # set +x 00:08:38.375 ************************************ 00:08:38.375 START TEST version 00:08:38.375 ************************************ 00:08:38.375 08:42:08 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:38.375 * Looking for test storage... 00:08:38.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:38.375 08:42:09 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:38.375 08:42:09 version -- common/autotest_common.sh@1693 -- # lcov --version 00:08:38.375 08:42:09 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:38.375 08:42:09 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:38.375 08:42:09 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.375 08:42:09 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.375 08:42:09 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.375 08:42:09 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.375 08:42:09 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.375 08:42:09 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.375 08:42:09 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.375 08:42:09 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.375 08:42:09 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.375 08:42:09 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.375 08:42:09 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.375 08:42:09 version -- scripts/common.sh@344 -- # case "$op" in 00:08:38.375 08:42:09 version -- scripts/common.sh@345 -- # : 1 00:08:38.375 08:42:09 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.375 08:42:09 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.375 08:42:09 version -- scripts/common.sh@365 -- # decimal 1 00:08:38.375 08:42:09 version -- scripts/common.sh@353 -- # local d=1 00:08:38.375 08:42:09 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.375 08:42:09 version -- scripts/common.sh@355 -- # echo 1 00:08:38.375 08:42:09 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.375 08:42:09 version -- scripts/common.sh@366 -- # decimal 2 00:08:38.375 08:42:09 version -- scripts/common.sh@353 -- # local d=2 00:08:38.375 08:42:09 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.375 08:42:09 version -- scripts/common.sh@355 -- # echo 2 00:08:38.375 08:42:09 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.375 08:42:09 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.375 08:42:09 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.375 08:42:09 version -- scripts/common.sh@368 -- # return 0 00:08:38.375 08:42:09 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.375 08:42:09 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:38.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.375 --rc genhtml_branch_coverage=1 00:08:38.375 --rc genhtml_function_coverage=1 00:08:38.375 --rc genhtml_legend=1 00:08:38.375 --rc geninfo_all_blocks=1 00:08:38.375 --rc geninfo_unexecuted_blocks=1 00:08:38.375 00:08:38.375 ' 00:08:38.375 08:42:09 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:38.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.375 --rc genhtml_branch_coverage=1 00:08:38.375 --rc genhtml_function_coverage=1 00:08:38.375 --rc genhtml_legend=1 00:08:38.375 --rc geninfo_all_blocks=1 00:08:38.375 --rc geninfo_unexecuted_blocks=1 00:08:38.375 00:08:38.375 ' 00:08:38.375 08:42:09 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:38.375 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.375 --rc genhtml_branch_coverage=1 00:08:38.375 --rc genhtml_function_coverage=1 00:08:38.375 --rc genhtml_legend=1 00:08:38.375 --rc geninfo_all_blocks=1 00:08:38.375 --rc geninfo_unexecuted_blocks=1 00:08:38.375 00:08:38.375 ' 00:08:38.375 08:42:09 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:38.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.375 --rc genhtml_branch_coverage=1 00:08:38.375 --rc genhtml_function_coverage=1 00:08:38.375 --rc genhtml_legend=1 00:08:38.375 --rc geninfo_all_blocks=1 00:08:38.375 --rc geninfo_unexecuted_blocks=1 00:08:38.375 00:08:38.375 ' 00:08:38.375 08:42:09 version -- app/version.sh@17 -- # get_header_version major 00:08:38.375 08:42:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:38.375 08:42:09 version -- app/version.sh@14 -- # cut -f2 00:08:38.375 08:42:09 version -- app/version.sh@14 -- # tr -d '"' 00:08:38.375 08:42:09 version -- app/version.sh@17 -- # major=25 00:08:38.375 08:42:09 version -- app/version.sh@18 -- # get_header_version minor 00:08:38.375 08:42:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:38.375 08:42:09 version -- app/version.sh@14 -- # cut -f2 00:08:38.375 08:42:09 version -- app/version.sh@14 -- # tr -d '"' 00:08:38.375 08:42:09 version -- app/version.sh@18 -- # minor=1 00:08:38.375 08:42:09 version -- app/version.sh@19 -- # get_header_version patch 00:08:38.375 08:42:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:38.375 08:42:09 version -- app/version.sh@14 -- # tr -d '"' 00:08:38.375 08:42:09 version -- app/version.sh@14 -- # cut -f2 00:08:38.375 08:42:09 version -- app/version.sh@19 -- # patch=0 00:08:38.375 
08:42:09 version -- app/version.sh@20 -- # get_header_version suffix 00:08:38.375 08:42:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:38.375 08:42:09 version -- app/version.sh@14 -- # tr -d '"' 00:08:38.375 08:42:09 version -- app/version.sh@14 -- # cut -f2 00:08:38.375 08:42:09 version -- app/version.sh@20 -- # suffix=-pre 00:08:38.375 08:42:09 version -- app/version.sh@22 -- # version=25.1 00:08:38.375 08:42:09 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:38.375 08:42:09 version -- app/version.sh@28 -- # version=25.1rc0 00:08:38.375 08:42:09 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:38.375 08:42:09 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:38.375 08:42:09 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:38.375 08:42:09 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:38.375 00:08:38.375 real 0m0.277s 00:08:38.375 user 0m0.180s 00:08:38.375 sys 0m0.133s 00:08:38.375 ************************************ 00:08:38.375 END TEST version 00:08:38.375 ************************************ 00:08:38.375 08:42:09 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.375 08:42:09 version -- common/autotest_common.sh@10 -- # set +x 00:08:38.375 08:42:09 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:38.375 08:42:09 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:08:38.375 08:42:09 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:38.375 08:42:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:38.376 08:42:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.376 08:42:09 -- 
common/autotest_common.sh@10 -- # set +x 00:08:38.376 ************************************ 00:08:38.376 START TEST bdev_raid 00:08:38.376 ************************************ 00:08:38.376 08:42:09 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:38.634 * Looking for test storage... 00:08:38.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:38.634 08:42:09 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:38.634 08:42:09 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:08:38.634 08:42:09 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:38.634 08:42:09 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@345 -- # : 1 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.634 08:42:09 bdev_raid -- scripts/common.sh@368 -- # return 0 00:08:38.634 08:42:09 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.634 08:42:09 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:38.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.634 --rc genhtml_branch_coverage=1 00:08:38.634 --rc genhtml_function_coverage=1 00:08:38.634 --rc genhtml_legend=1 00:08:38.634 --rc geninfo_all_blocks=1 00:08:38.634 --rc geninfo_unexecuted_blocks=1 00:08:38.634 00:08:38.634 ' 00:08:38.634 08:42:09 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:38.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.634 --rc genhtml_branch_coverage=1 00:08:38.634 --rc genhtml_function_coverage=1 00:08:38.634 --rc genhtml_legend=1 00:08:38.634 --rc geninfo_all_blocks=1 00:08:38.634 --rc geninfo_unexecuted_blocks=1 00:08:38.634 00:08:38.634 ' 00:08:38.634 08:42:09 bdev_raid -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:08:38.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.634 --rc genhtml_branch_coverage=1 00:08:38.634 --rc genhtml_function_coverage=1 00:08:38.634 --rc genhtml_legend=1 00:08:38.634 --rc geninfo_all_blocks=1 00:08:38.634 --rc geninfo_unexecuted_blocks=1 00:08:38.634 00:08:38.634 ' 00:08:38.634 08:42:09 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:38.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.634 --rc genhtml_branch_coverage=1 00:08:38.634 --rc genhtml_function_coverage=1 00:08:38.634 --rc genhtml_legend=1 00:08:38.634 --rc geninfo_all_blocks=1 00:08:38.634 --rc geninfo_unexecuted_blocks=1 00:08:38.634 00:08:38.634 ' 00:08:38.634 08:42:09 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:38.634 08:42:09 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:08:38.634 08:42:09 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:08:38.634 08:42:09 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:08:38.634 08:42:09 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:08:38.634 08:42:09 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:08:38.634 08:42:09 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:08:38.634 08:42:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:38.635 08:42:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.635 08:42:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:38.635 ************************************ 00:08:38.635 START TEST raid1_resize_data_offset_test 00:08:38.635 ************************************ 00:08:38.635 08:42:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:08:38.635 08:42:09 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=59957
00:08:38.635 08:42:09 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:38.635 Process raid pid: 59957
00:08:38.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:38.635 08:42:09 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59957'
00:08:38.635 08:42:09 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59957
00:08:38.635 08:42:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59957 ']'
00:08:38.635 08:42:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:38.635 08:42:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:38.635 08:42:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:38.635 08:42:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:38.635 08:42:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:38.927 [2024-11-20 08:42:09.565249] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
00:08:38.927 [2024-11-20 08:42:09.565690] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:38.927 [2024-11-20 08:42:09.751136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:39.220 [2024-11-20 08:42:09.890751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:39.220 [2024-11-20 08:42:10.098694] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:39.220 [2024-11-20 08:42:10.098749] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:39.787 08:42:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:39.787 08:42:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0
00:08:39.787 08:42:10 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16
00:08:39.787 08:42:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:39.787 08:42:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:39.787 malloc0
00:08:39.787 08:42:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:39.787 08:42:10 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16
00:08:39.787 08:42:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:39.787 08:42:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.046 malloc1
00:08:40.046 08:42:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.046 08:42:10 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512
00:08:40.046 08:42:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.046 08:42:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.046 null0
00:08:40.046 08:42:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.046 08:42:10 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s
00:08:40.046 08:42:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.046 08:42:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.046 [2024-11-20 08:42:10.790586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed
00:08:40.046 [2024-11-20 08:42:10.793102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:08:40.046 [2024-11-20 08:42:10.793204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed
00:08:40.046 [2024-11-20 08:42:10.793452] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:08:40.046 [2024-11-20 08:42:10.793477] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512
00:08:40.046 [2024-11-20 08:42:10.793840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:08:40.046 [2024-11-20 08:42:10.794069] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:08:40.046 [2024-11-20 08:42:10.794091] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:08:40.046 [2024-11-20 08:42:10.794321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:40.046 08:42:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.046 08:42:10 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:40.046 08:42:10 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:08:40.046 08:42:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.046 08:42:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.046 08:42:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.046 08:42:10 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:08:40.046 08:42:10 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:08:40.046 08:42:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.046 08:42:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.046 [2024-11-20 08:42:10.846645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:08:40.046 08:42:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.046 08:42:10 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:08:40.046 08:42:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.046 08:42:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.613 malloc2
00:08:40.613 08:42:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.613 08:42:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:08:40.613 08:42:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.613 08:42:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.614 [2024-11-20 08:42:11.400786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:08:40.614 [2024-11-20 08:42:11.417992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:40.614 08:42:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.614 [2024-11-20 08:42:11.420469] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:08:40.614 08:42:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:40.614 08:42:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.614 08:42:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.614 08:42:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:08:40.614 08:42:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.614 08:42:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:08:40.614 08:42:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59957
00:08:40.614 08:42:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59957 ']'
00:08:40.614 08:42:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59957
00:08:40.614 08:42:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname
00:08:40.614 08:42:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:40.614 08:42:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59957
killing process with pid 59957
00:08:40.614 08:42:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:40.614 08:42:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:40.614 08:42:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59957'
00:08:40.614 08:42:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59957
00:08:40.614 08:42:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59957
00:08:40.614 [2024-11-20 08:42:11.509356] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:40.614 [2024-11-20 08:42:11.511472] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:08:40.614 [2024-11-20 08:42:11.511565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:40.614 [2024-11-20 08:42:11.511592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:08:40.873 [2024-11-20 08:42:11.543746] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:40.873 [2024-11-20 08:42:11.544214] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:40.873 [2024-11-20 08:42:11.544241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:08:42.774 [2024-11-20 08:42:13.240479] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:43.710 08:42:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:08:43.710
00:08:43.710 real 0m4.819s
00:08:43.710 user 0m4.835s
00:08:43.710 sys 0m0.632s
************************************
00:08:43.710 END TEST raid1_resize_data_offset_test
00:08:43.710 ************************************
00:08:43.710 08:42:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:43.710 08:42:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.710 08:42:14 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:08:43.710 08:42:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:43.710 08:42:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:43.710 08:42:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:43.710 ************************************
00:08:43.710 START TEST raid0_resize_superblock_test
************************************
00:08:43.710 08:42:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0
00:08:43.710 08:42:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:08:43.710 Process raid pid: 60041
00:08:43.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:43.710 08:42:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60041
00:08:43.710 08:42:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60041'
00:08:43.710 08:42:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60041
00:08:43.710 08:42:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:43.710 08:42:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60041 ']'
00:08:43.710 08:42:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:43.710 08:42:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:43.710 08:42:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:43.710 08:42:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:43.710 08:42:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.969 [2024-11-20 08:42:14.445090] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
00:08:43.710 [2024-11-20 08:42:14.445532] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:43.969 [2024-11-20 08:42:14.634998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:43.969 [2024-11-20 08:42:14.769401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:44.228 [2024-11-20 08:42:14.980324] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:44.228 [2024-11-20 08:42:14.980633] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:44.794 08:42:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:44.794 08:42:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:08:44.794 08:42:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:08:44.794 08:42:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.794 08:42:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.363 malloc0
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.363 [2024-11-20 08:42:16.056197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:08:45.363 [2024-11-20 08:42:16.056308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:45.363 [2024-11-20 08:42:16.056357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:08:45.363 [2024-11-20 08:42:16.056384] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:45.363 [2024-11-20 08:42:16.059724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:45.363 [2024-11-20 08:42:16.059783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:08:45.363 pt0
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.363 bcec6721-9b8b-4016-a93b-302e83153cdf
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.363 77cfc227-dbfb-40a9-b516-fe80b7377426
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.363 b4aa72cc-be77-431c-9c84-e3df458cbb7e
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.363 [2024-11-20 08:42:16.208754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 77cfc227-dbfb-40a9-b516-fe80b7377426 is claimed
00:08:45.363 [2024-11-20 08:42:16.208914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b4aa72cc-be77-431c-9c84-e3df458cbb7e is claimed
00:08:45.363 [2024-11-20 08:42:16.209225] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:08:45.363 [2024-11-20 08:42:16.209255] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
00:08:45.363 [2024-11-20 08:42:16.209612] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:45.363 [2024-11-20 08:42:16.209873] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:08:45.363 [2024-11-20 08:42:16.209891] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:08:45.363 [2024-11-20 08:42:16.210097] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.363 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:08:45.622 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.622 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:08:45.622 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:45.622 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:45.622 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:45.622 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:08:45.622 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.622 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.622 [2024-11-20 08:42:16.321009] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:45.622 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.622 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:45.622 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:08:45.622 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:08:45.622 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:08:45.622 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.622 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.623 [2024-11-20 08:42:16.369040] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:08:45.623 [2024-11-20 08:42:16.369087] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '77cfc227-dbfb-40a9-b516-fe80b7377426' was resized: old size 131072, new size 204800
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.623 [2024-11-20 08:42:16.376981] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:08:45.623 [2024-11-20 08:42:16.377171] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b4aa72cc-be77-431c-9c84-e3df458cbb7e' was resized: old size 131072, new size 204800
[2024-11-20 08:42:16.377239] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.623 [2024-11-20 08:42:16.493070] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.623 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.882 [2024-11-20 08:42:16.536813] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
[2024-11-20 08:42:16.537046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
[2024-11-20 08:42:16.537076] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-11-20 08:42:16.537103] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
[2024-11-20 08:42:16.537273] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-20 08:42:16.537328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-20 08:42:16.537348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.882 [2024-11-20 08:42:16.548727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-11-20 08:42:16.548969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-20 08:42:16.549130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
[2024-11-20 08:42:16.549284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-20 08:42:16.552418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-20 08:42:16.552602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:08:45.882 pt0
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
[2024-11-20 08:42:16.555139] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 77cfc227-dbfb-40a9-b516-fe80b7377426
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.882 [2024-11-20 08:42:16.555386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 77cfc227-dbfb-40a9-b516-fe80b7377426 is claimed
[2024-11-20 08:42:16.555564] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b4aa72cc-be77-431c-9c84-e3df458cbb7e
[2024-11-20 08:42:16.555744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b4aa72cc-be77-431c-9c84-e3df458cbb7e is claimed
[2024-11-20 08:42:16.555981] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev b4aa72cc-be77-431c-9c84-e3df458cbb7e (2) smaller than existing raid bdev Raid (3)
[2024-11-20 08:42:16.556168] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 77cfc227-dbfb-40a9-b516-fe80b7377426: File exists
[2024-11-20 08:42:16.556297] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
[2024-11-20 08:42:16.556352] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
[2024-11-20 08:42:16.556759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
[2024-11-20 08:42:16.557077] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
[2024-11-20 08:42:16.557221] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
[2024-11-20 08:42:16.557477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:45.882 [2024-11-20 08:42:16.573744] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60041
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60041 ']'
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60041
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:45.882 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60041
killing process with pid 60041
00:08:45.883 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:45.883 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:45.883 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60041'
00:08:45.883 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60041
00:08:45.883 [2024-11-20 08:42:16.651200] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:45.883 08:42:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60041
00:08:45.883 [2024-11-20 08:42:16.651316] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-20 08:42:16.651382] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-20 08:42:16.651397] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:08:47.263 [2024-11-20 08:42:17.943922] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:48.199 08:42:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:08:48.199
00:08:48.199 real 0m4.649s
00:08:48.199 user 0m5.029s
00:08:48.199 sys 0m0.637s
00:08:48.199 08:42:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:48.199 08:42:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:48.199 ************************************
00:08:48.199 END TEST raid0_resize_superblock_test
00:08:48.199 ************************************
00:08:48.199 08:42:19 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:08:48.199 08:42:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:48.199 08:42:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:48.199 08:42:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:48.199 ************************************
00:08:48.199 START TEST raid1_resize_superblock_test
************************************
00:08:48.199 08:42:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1
00:08:48.199 08:42:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:08:48.199 08:42:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60139
00:08:48.199 Process raid pid: 60139
00:08:48.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:48.199 08:42:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60139'
00:08:48.199 08:42:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:48.199 08:42:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60139
00:08:48.199 08:42:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60139 ']'
00:08:48.199 08:42:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:48.199 08:42:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:48.199 08:42:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:48.199 08:42:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:48.199 08:42:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:48.458 [2024-11-20 08:42:19.141365] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
00:08:48.458 [2024-11-20 08:42:19.141778] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:48.458 [2024-11-20 08:42:19.320201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:48.716 [2024-11-20 08:42:19.451513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:48.975 [2024-11-20 08:42:19.656533] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:48.975 [2024-11-20 08:42:19.656785] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:49.233 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:49.233 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:08:49.234 08:42:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:08:49.234 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:49.234 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:49.800 malloc0
00:08:49.800 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:49.800 08:42:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:08:49.800 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:49.800 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:49.800 [2024-11-20 08:42:20.642726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:08:49.800 [2024-11-20 08:42:20.642946] vbdev_passthru.c:
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.800 [2024-11-20 08:42:20.643130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:49.800 [2024-11-20 08:42:20.643274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.800 [2024-11-20 08:42:20.646284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.800 [2024-11-20 08:42:20.646455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:49.800 pt0 00:08:49.800 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.800 08:42:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:49.800 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.800 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.060 84a64b66-5088-4ed1-a25c-7682b5d7a141 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.060 9f8e05ee-c65c-43c5-8b52-d106b26fc08e 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.060 08:42:20 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.060 578fdbaf-60b6-4847-9442-e236b06efd65 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.060 [2024-11-20 08:42:20.779475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9f8e05ee-c65c-43c5-8b52-d106b26fc08e is claimed 00:08:50.060 [2024-11-20 08:42:20.779644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 578fdbaf-60b6-4847-9442-e236b06efd65 is claimed 00:08:50.060 [2024-11-20 08:42:20.779858] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:50.060 [2024-11-20 08:42:20.779885] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:08:50.060 [2024-11-20 08:42:20.780283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:50.060 [2024-11-20 08:42:20.780548] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:50.060 [2024-11-20 08:42:20.780565] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:50.060 [2024-11-20 08:42:20.780792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:08:50.060 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.061 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.061 [2024-11-20 
08:42:20.903784] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.061 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.061 08:42:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:50.061 08:42:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:50.061 08:42:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:08:50.061 08:42:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:50.061 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.061 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.061 [2024-11-20 08:42:20.947776] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:50.061 [2024-11-20 08:42:20.947816] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '9f8e05ee-c65c-43c5-8b52-d106b26fc08e' was resized: old size 131072, new size 204800 00:08:50.061 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.061 08:42:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:50.061 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.061 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.061 [2024-11-20 08:42:20.959687] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:50.061 [2024-11-20 08:42:20.959725] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '578fdbaf-60b6-4847-9442-e236b06efd65' was resized: old size 131072, new size 204800 00:08:50.061 
[2024-11-20 08:42:20.959764] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:08:50.061 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.061 08:42:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:50.061 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.061 08:42:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:50.061 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.321 08:42:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:50.321 08:42:21 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.321 [2024-11-20 08:42:21.131843] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.321 [2024-11-20 08:42:21.183566] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:08:50.321 [2024-11-20 08:42:21.183676] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:50.321 [2024-11-20 08:42:21.183717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:50.321 [2024-11-20 08:42:21.183925] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.321 [2024-11-20 08:42:21.184202] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.321 [2024-11-20 08:42:21.184302] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.321 
[2024-11-20 08:42:21.184325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.321 [2024-11-20 08:42:21.191461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:50.321 [2024-11-20 08:42:21.191548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.321 [2024-11-20 08:42:21.191580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:50.321 [2024-11-20 08:42:21.191602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.321 [2024-11-20 08:42:21.194467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.321 [2024-11-20 08:42:21.194520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:50.321 pt0 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.321 [2024-11-20 08:42:21.196817] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 9f8e05ee-c65c-43c5-8b52-d106b26fc08e 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:50.321 [2024-11-20 08:42:21.197050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9f8e05ee-c65c-43c5-8b52-d106b26fc08e is claimed 00:08:50.321 [2024-11-20 08:42:21.197239] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev 578fdbaf-60b6-4847-9442-e236b06efd65 00:08:50.321 [2024-11-20 08:42:21.197275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 578fdbaf-60b6-4847-9442-e236b06efd65 is claimed 00:08:50.321 [2024-11-20 08:42:21.197435] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 578fdbaf-60b6-4847-9442-e236b06efd65 (2) smaller than existing raid bdev Raid (3) 00:08:50.321 [2024-11-20 08:42:21.197472] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 9f8e05ee-c65c-43c5-8b52-d106b26fc08e: File exists 00:08:50.321 [2024-11-20 08:42:21.197530] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:50.321 [2024-11-20 08:42:21.197550] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.321 [2024-11-20 08:42:21.197885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.321 [2024-11-20 08:42:21.198090] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:50.321 [2024-11-20 08:42:21.198112] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:50.321 [2024-11-20 08:42:21.198356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:50.321 08:42:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:08:50.321 [2024-11-20 08:42:21.215833] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.623 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.623 08:42:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:50.623 08:42:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:50.623 08:42:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:08:50.623 08:42:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60139 00:08:50.623 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60139 ']' 00:08:50.623 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60139 00:08:50.623 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:50.623 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.623 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60139 00:08:50.623 killing process with pid 60139 00:08:50.623 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:50.623 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.623 08:42:21 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60139' 00:08:50.623 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60139 00:08:50.623 [2024-11-20 08:42:21.300228] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:50.623 [2024-11-20 08:42:21.300331] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.623 08:42:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60139 00:08:50.623 [2024-11-20 08:42:21.300405] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.623 [2024-11-20 08:42:21.300420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:51.999 [2024-11-20 08:42:22.597514] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:52.934 08:42:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:52.934 00:08:52.934 real 0m4.605s 00:08:52.934 user 0m4.966s 00:08:52.934 sys 0m0.624s 00:08:52.934 08:42:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.934 ************************************ 00:08:52.934 END TEST raid1_resize_superblock_test 00:08:52.934 ************************************ 00:08:52.934 08:42:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.934 08:42:23 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:08:52.934 08:42:23 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:08:52.934 08:42:23 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:08:52.934 08:42:23 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:08:52.934 08:42:23 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:08:52.934 08:42:23 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:08:52.934 
08:42:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:52.934 08:42:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.934 08:42:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:52.934 ************************************ 00:08:52.934 START TEST raid_function_test_raid0 00:08:52.934 ************************************ 00:08:52.935 08:42:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:08:52.935 08:42:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:08:52.935 08:42:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:52.935 08:42:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:52.935 Process raid pid: 60242 00:08:52.935 08:42:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60242 00:08:52.935 08:42:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:52.935 08:42:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60242' 00:08:52.935 08:42:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60242 00:08:52.935 08:42:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60242 ']' 00:08:52.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.935 08:42:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.935 08:42:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.935 08:42:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:52.935 08:42:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.935 08:42:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:52.935 [2024-11-20 08:42:23.825801] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:52.935 [2024-11-20 08:42:23.825984] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.193 [2024-11-20 08:42:24.018312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.452 [2024-11-20 08:42:24.181693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.711 [2024-11-20 08:42:24.418465] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.711 [2024-11-20 08:42:24.418757] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.970 08:42:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.970 08:42:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:08:53.970 08:42:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:53.970 08:42:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.970 08:42:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:54.229 Base_1 00:08:54.229 08:42:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.229 08:42:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:54.229 08:42:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.229 
08:42:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:54.229 Base_2 00:08:54.229 08:42:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.229 08:42:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:08:54.229 08:42:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.229 08:42:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:54.229 [2024-11-20 08:42:24.965632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:54.229 [2024-11-20 08:42:24.968078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:54.229 [2024-11-20 08:42:24.968342] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:54.229 [2024-11-20 08:42:24.968373] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:54.229 [2024-11-20 08:42:24.968715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:54.229 [2024-11-20 08:42:24.968912] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:54.229 [2024-11-20 08:42:24.968929] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:54.229 [2024-11-20 08:42:24.969122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.229 08:42:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.229 08:42:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:54.229 08:42:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:54.229 08:42:24 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.229 08:42:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:54.229 08:42:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.229 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:54.229 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:54.229 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:54.229 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:54.229 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:54.229 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:54.229 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:54.229 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:54.229 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:08:54.229 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:54.230 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:54.230 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:54.488 [2024-11-20 08:42:25.357809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:54.488 /dev/nbd0 00:08:54.488 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:54.488 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:08:54.488 08:42:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:54.488 08:42:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:08:54.488 08:42:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:54.488 08:42:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:54.488 08:42:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:54.488 08:42:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:08:54.488 08:42:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:54.488 08:42:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:54.488 08:42:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:54.746 1+0 records in 00:08:54.746 1+0 records out 00:08:54.746 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369966 s, 11.1 MB/s 00:08:54.746 08:42:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.746 08:42:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:08:54.746 08:42:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.746 08:42:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:54.746 08:42:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:08:54.746 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:54.746 08:42:25 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:54.746 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:54.746 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:54.746 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:55.004 { 00:08:55.004 "nbd_device": "/dev/nbd0", 00:08:55.004 "bdev_name": "raid" 00:08:55.004 } 00:08:55.004 ]' 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:55.004 { 00:08:55.004 "nbd_device": "/dev/nbd0", 00:08:55.004 "bdev_name": "raid" 00:08:55.004 } 00:08:55.004 ]' 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:55.004 4096+0 records in 00:08:55.004 4096+0 records out 00:08:55.004 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0321062 s, 65.3 MB/s 00:08:55.004 08:42:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:55.262 4096+0 records in 00:08:55.262 4096+0 records out 00:08:55.262 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.332794 s, 6.3 MB/s 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:55.521 128+0 records in 00:08:55.521 128+0 records out 00:08:55.521 65536 bytes (66 kB, 64 KiB) copied, 0.00112036 s, 58.5 MB/s 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:55.521 2035+0 records in 00:08:55.521 2035+0 records out 00:08:55.521 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.0129813 s, 80.3 MB/s 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:55.521 456+0 records in 00:08:55.521 456+0 records out 00:08:55.521 233472 bytes (233 kB, 228 KiB) copied, 0.00195244 s, 120 MB/s 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:55.521 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:55.779 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:55.779 [2024-11-20 08:42:26.626703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.779 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:55.779 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:55.779 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:55.779 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:55.779 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:55.779 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:08:55.779 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:08:55.779 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:55.779 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:55.779 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:08:56.346 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:56.346 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:56.346 08:42:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:56.346 08:42:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:56.346 08:42:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:56.346 08:42:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:56.346 08:42:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:08:56.346 08:42:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:08:56.346 08:42:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:56.346 08:42:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:08:56.346 08:42:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:56.346 08:42:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60242 00:08:56.347 08:42:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60242 ']' 00:08:56.347 08:42:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60242 00:08:56.347 08:42:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:08:56.347 08:42:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.347 08:42:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60242 00:08:56.347 killing process with pid 60242 00:08:56.347 08:42:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:56.347 08:42:27 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:56.347 08:42:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60242' 00:08:56.347 08:42:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60242 00:08:56.347 08:42:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60242 00:08:56.347 [2024-11-20 08:42:27.056633] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:56.347 [2024-11-20 08:42:27.056757] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.347 [2024-11-20 08:42:27.056825] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:56.347 [2024-11-20 08:42:27.056849] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:56.347 [2024-11-20 08:42:27.243673] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:57.723 08:42:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:08:57.723 00:08:57.723 real 0m4.578s 00:08:57.723 user 0m5.774s 00:08:57.723 sys 0m1.062s 00:08:57.723 08:42:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.723 ************************************ 00:08:57.723 08:42:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:57.723 END TEST raid_function_test_raid0 00:08:57.723 ************************************ 00:08:57.723 08:42:28 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:08:57.723 08:42:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:57.723 08:42:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.723 08:42:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:57.723 
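The unmap_off/unmap_len values traced by bdev_raid.sh above (unmap_off=526336, unmap_len=1041920, and so on) are simply the block offsets and counts from unmap_blk_offs/unmap_blk_nums scaled by the 512-byte logical sector size that `lsblk -o LOG-SEC` reported for /dev/nbd0. A minimal sketch of that arithmetic only — this is not part of the SPDK scripts:

```shell
# Reproduce the unmap_off/unmap_len arithmetic from the trace: each value is
# a block offset/count from unmap_blk_offs/unmap_blk_nums multiplied by the
# 512-byte logical sector size reported by `lsblk -o LOG-SEC /dev/nbd0`.
blksize=512
unmap_blk_offs=(0 1028 321)
unmap_blk_nums=(128 2035 456)
for i in 0 1 2; do
    unmap_off=$((unmap_blk_offs[i] * blksize))
    unmap_len=$((unmap_blk_nums[i] * blksize))
    echo "$unmap_off $unmap_len"
done
```

These byte values are exactly what the test then passes to `blkdiscard -o <unmap_off> -l <unmap_len> /dev/nbd0` before re-running `cmp` against the zero-filled reference file.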
************************************ 00:08:57.723 START TEST raid_function_test_concat 00:08:57.723 ************************************ 00:08:57.723 08:42:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:08:57.723 08:42:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:08:57.723 08:42:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:57.723 08:42:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:57.723 08:42:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60378 00:08:57.723 08:42:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:57.723 Process raid pid: 60378 00:08:57.723 08:42:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60378' 00:08:57.723 08:42:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60378 00:08:57.723 08:42:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60378 ']' 00:08:57.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.723 08:42:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.723 08:42:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.723 08:42:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:57.723 08:42:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.723 08:42:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:57.723 [2024-11-20 08:42:28.459475] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:08:57.723 [2024-11-20 08:42:28.459664] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.982 [2024-11-20 08:42:28.639657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.982 [2024-11-20 08:42:28.778688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.241 [2024-11-20 08:42:28.983911] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.241 [2024-11-20 08:42:28.983973] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.498 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.498 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:08:58.498 08:42:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:58.498 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.498 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:58.756 Base_1 00:08:58.756 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.756 08:42:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:58.756 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:58.756 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:58.756 Base_2 00:08:58.756 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.756 08:42:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:08:58.756 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.756 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:58.756 [2024-11-20 08:42:29.476439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:58.756 [2024-11-20 08:42:29.478912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:58.756 [2024-11-20 08:42:29.479036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:58.756 [2024-11-20 08:42:29.479070] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:58.756 [2024-11-20 08:42:29.479490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:58.756 [2024-11-20 08:42:29.479717] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:58.756 [2024-11-20 08:42:29.479754] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:58.756 [2024-11-20 08:42:29.479978] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:58.756 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.756 08:42:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:58.756 08:42:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:58.756 08:42:29 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.756 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:58.756 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.756 08:42:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:58.756 08:42:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:58.756 08:42:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:58.756 08:42:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:58.756 08:42:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:58.756 08:42:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:58.756 08:42:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:58.756 08:42:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:58.756 08:42:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:08:58.757 08:42:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:58.757 08:42:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:58.757 08:42:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:59.036 [2024-11-20 08:42:29.816561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:59.036 /dev/nbd0 00:08:59.036 08:42:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:59.036 08:42:29 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:59.036 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:59.036 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:08:59.036 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:59.036 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:59.036 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:59.036 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:08:59.036 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:59.036 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:59.036 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:59.036 1+0 records in 00:08:59.036 1+0 records out 00:08:59.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253117 s, 16.2 MB/s 00:08:59.036 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.036 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:08:59.036 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.036 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:59.036 08:42:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:08:59.036 08:42:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:59.036 
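The waitfornbd trace above polls /proc/partitions for the device name and then proves the device answers a direct read via `dd ... iflag=direct`. A stripped-down re-creation of that polling pattern, with a hypothetical helper name and a file argument standing in for /proc/partitions — not an SPDK function:

```shell
# Hypothetical re-creation of the waitfornbd readiness loop seen in the
# trace: poll a partition table (here an arbitrary file, standing in for
# /proc/partitions) up to 20 times for a whole-word device name.
wait_for_name() {
    local name=$1 table=$2 i
    for ((i = 1; i <= 20; i++)); do
        # -w matches whole words only, so "nbd0" does not match "nbd01"
        if grep -q -w "$name" "$table"; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```

In the real script a second loop follows: a `dd if=/dev/nbd0 ... iflag=direct` read confirms the kernel can actually serve I/O, not merely that the name is listed.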
08:42:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:59.036 08:42:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:59.036 08:42:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:59.036 08:42:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:59.295 08:42:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:59.295 { 00:08:59.295 "nbd_device": "/dev/nbd0", 00:08:59.295 "bdev_name": "raid" 00:08:59.295 } 00:08:59.295 ]' 00:08:59.295 08:42:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:59.295 { 00:08:59.295 "nbd_device": "/dev/nbd0", 00:08:59.295 "bdev_name": "raid" 00:08:59.295 } 00:08:59.295 ]' 00:08:59.295 08:42:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:59.554 08:42:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:59.554 08:42:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:59.554 08:42:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:59.554 08:42:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:08:59.554 08:42:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:08:59.554 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:08:59.554 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:59.554 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:59.554 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:59.554 
08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:59.554 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:59.554 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:08:59.554 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:08:59.554 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:59.554 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:59.554 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:59.554 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:59.554 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:59.554 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:59.554 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:59.554 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:59.554 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:59.554 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:59.554 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:59.554 4096+0 records in 00:08:59.554 4096+0 records out 00:08:59.554 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0254605 s, 82.4 MB/s 00:08:59.554 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:59.813 4096+0 records in 00:08:59.813 4096+0 
records out 00:08:59.813 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.308787 s, 6.8 MB/s 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:59.813 128+0 records in 00:08:59.813 128+0 records out 00:08:59.813 65536 bytes (66 kB, 64 KiB) copied, 0.000702497 s, 93.3 MB/s 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:08:59.813 2035+0 records in 00:08:59.813 2035+0 records out 00:08:59.813 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00945833 s, 110 MB/s 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:59.813 456+0 records in 00:08:59.813 456+0 records out 00:08:59.813 233472 bytes (233 kB, 228 KiB) copied, 0.00381625 s, 61.2 MB/s 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:08:59.813 08:42:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:59.814 08:42:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:59.814 08:42:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:59.814 08:42:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:59.814 08:42:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:08:59.814 08:42:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.814 08:42:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:09:00.382 08:42:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:00.382 [2024-11-20 08:42:31.022680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.382 08:42:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:00.382 08:42:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:00.382 08:42:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:00.382 08:42:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:00.382 08:42:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:00.382 08:42:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:09:00.382 08:42:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:09:00.382 08:42:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:09:00.382 08:42:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:00.382 08:42:31 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:00.641 08:42:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:00.641 08:42:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:00.641 08:42:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:00.641 08:42:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:00.641 08:42:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:00.641 08:42:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:00.641 08:42:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:09:00.641 08:42:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:09:00.641 08:42:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:00.641 08:42:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:09:00.641 08:42:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:09:00.641 08:42:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60378 00:09:00.641 08:42:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60378 ']' 00:09:00.641 08:42:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60378 00:09:00.641 08:42:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:09:00.641 08:42:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.641 08:42:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60378 00:09:00.641 08:42:31 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:00.641 08:42:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:00.641 killing process with pid 60378 00:09:00.641 08:42:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60378' 00:09:00.641 08:42:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60378 00:09:00.641 [2024-11-20 08:42:31.462199] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:00.641 08:42:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60378 00:09:00.641 [2024-11-20 08:42:31.462334] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.642 [2024-11-20 08:42:31.462405] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:00.642 [2024-11-20 08:42:31.462425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:09:00.900 [2024-11-20 08:42:31.649731] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:01.837 08:42:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:09:01.837 00:09:01.837 real 0m4.329s 00:09:01.837 user 0m5.311s 00:09:01.837 sys 0m1.064s 00:09:01.837 08:42:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.837 08:42:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:01.837 ************************************ 00:09:01.837 END TEST raid_function_test_concat 00:09:01.837 ************************************ 00:09:01.837 08:42:32 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:09:01.837 08:42:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:01.837 08:42:32 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.837 08:42:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:01.837 ************************************ 00:09:01.837 START TEST raid0_resize_test 00:09:01.837 ************************************ 00:09:01.837 08:42:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:09:01.837 08:42:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:09:01.837 08:42:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:09:01.837 08:42:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:09:01.837 08:42:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:09:01.837 08:42:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:09:01.837 08:42:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:09:01.837 08:42:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:09:01.837 08:42:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:09:01.837 08:42:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60513 00:09:01.837 Process raid pid: 60513 00:09:01.837 08:42:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60513' 00:09:01.837 08:42:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60513 00:09:01.837 08:42:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:01.838 08:42:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60513 ']' 00:09:01.838 08:42:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.838 08:42:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:09:01.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.838 08:42:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.838 08:42:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.838 08:42:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.097 [2024-11-20 08:42:32.847841] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:09:02.097 [2024-11-20 08:42:32.848018] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.356 [2024-11-20 08:42:33.043744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.356 [2024-11-20 08:42:33.205755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.614 [2024-11-20 08:42:33.419120] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.614 [2024-11-20 08:42:33.419197] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.182 08:42:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.183 Base_1 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.183 
08:42:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.183 Base_2 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.183 [2024-11-20 08:42:33.939444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:03.183 [2024-11-20 08:42:33.942079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:03.183 [2024-11-20 08:42:33.942175] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:03.183 [2024-11-20 08:42:33.942225] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:03.183 [2024-11-20 08:42:33.942595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:03.183 [2024-11-20 08:42:33.942781] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:03.183 [2024-11-20 08:42:33.942906] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:03.183 [2024-11-20 08:42:33.943122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.183 
08:42:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.183 [2024-11-20 08:42:33.947438] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:03.183 [2024-11-20 08:42:33.947492] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:03.183 true 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:09:03.183 [2024-11-20 08:42:33.959698] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.183 08:42:33 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.183 [2024-11-20 08:42:34.007503] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:03.183 [2024-11-20 08:42:34.007549] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:03.183 [2024-11-20 08:42:34.007606] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:09:03.183 true 00:09:03.183 08:42:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.183 08:42:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:03.183 08:42:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.183 08:42:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.183 08:42:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:09:03.183 [2024-11-20 08:42:34.019743] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:03.183 08:42:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.183 08:42:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:09:03.183 08:42:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:09:03.183 08:42:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:09:03.183 08:42:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:09:03.183 08:42:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:09:03.183 08:42:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60513 00:09:03.183 08:42:34 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@954 -- # '[' -z 60513 ']' 00:09:03.183 08:42:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60513 00:09:03.183 08:42:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:09:03.183 08:42:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.183 08:42:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60513 00:09:03.442 killing process with pid 60513 00:09:03.442 08:42:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.442 08:42:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.442 08:42:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60513' 00:09:03.442 08:42:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60513 00:09:03.442 08:42:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60513 00:09:03.442 [2024-11-20 08:42:34.103124] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:03.442 [2024-11-20 08:42:34.103267] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:03.442 [2024-11-20 08:42:34.103343] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:03.442 [2024-11-20 08:42:34.103361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:03.442 [2024-11-20 08:42:34.119291] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:04.378 08:42:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:09:04.378 00:09:04.378 real 0m2.450s 00:09:04.378 user 0m2.765s 00:09:04.378 sys 0m0.409s 00:09:04.378 08:42:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.378 
************************************ 00:09:04.378 END TEST raid0_resize_test 00:09:04.378 ************************************ 00:09:04.378 08:42:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.378 08:42:35 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:09:04.378 08:42:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:04.378 08:42:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.378 08:42:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:04.378 ************************************ 00:09:04.378 START TEST raid1_resize_test 00:09:04.378 ************************************ 00:09:04.378 08:42:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:09:04.378 08:42:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:09:04.378 08:42:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:09:04.378 08:42:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:09:04.378 08:42:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:09:04.378 08:42:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:09:04.378 08:42:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:09:04.378 08:42:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:09:04.378 08:42:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:09:04.378 08:42:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60569 00:09:04.378 08:42:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60569' 00:09:04.378 Process raid pid: 60569 00:09:04.378 08:42:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:04.378 08:42:35 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60569 00:09:04.378 08:42:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60569 ']' 00:09:04.378 08:42:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.378 08:42:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.378 08:42:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.378 08:42:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.378 08:42:35 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.638 [2024-11-20 08:42:35.347083] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:09:04.638 [2024-11-20 08:42:35.347283] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.638 [2024-11-20 08:42:35.528827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.897 [2024-11-20 08:42:35.678794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.157 [2024-11-20 08:42:35.898563] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.157 [2024-11-20 08:42:35.898856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.725 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.725 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:09:05.725 08:42:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:09:05.725 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.725 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.725 Base_1 00:09:05.725 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.725 08:42:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:09:05.725 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.725 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.725 Base_2 00:09:05.725 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.725 08:42:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:09:05.725 08:42:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:09:05.725 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.725 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.725 [2024-11-20 08:42:36.432646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:05.725 [2024-11-20 08:42:36.435154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:05.725 [2024-11-20 08:42:36.435473] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:05.725 [2024-11-20 08:42:36.435510] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:05.725 [2024-11-20 08:42:36.435922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:05.726 [2024-11-20 08:42:36.436099] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:05.726 [2024-11-20 08:42:36.436116] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:05.726 [2024-11-20 08:42:36.436356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.726 [2024-11-20 08:42:36.440615] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:05.726 [2024-11-20 08:42:36.440786] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:05.726 true 00:09:05.726 
08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.726 [2024-11-20 08:42:36.452856] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.726 [2024-11-20 08:42:36.504684] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:05.726 [2024-11-20 08:42:36.504861] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:05.726 [2024-11-20 08:42:36.505037] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:09:05.726 true 00:09:05.726 08:42:36 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:09:05.726 [2024-11-20 08:42:36.516898] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60569 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60569 ']' 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60569 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60569 00:09:05.726 killing process with pid 60569 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.726 08:42:36 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60569' 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60569 00:09:05.726 08:42:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60569 00:09:05.726 [2024-11-20 08:42:36.598328] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:05.726 [2024-11-20 08:42:36.598450] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.726 [2024-11-20 08:42:36.599067] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.726 [2024-11-20 08:42:36.599102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:05.726 [2024-11-20 08:42:36.614473] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:07.103 08:42:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:09:07.103 00:09:07.103 real 0m2.469s 00:09:07.103 user 0m2.772s 00:09:07.103 sys 0m0.401s 00:09:07.103 08:42:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.103 08:42:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.103 ************************************ 00:09:07.103 END TEST raid1_resize_test 00:09:07.103 ************************************ 00:09:07.103 08:42:37 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:07.103 08:42:37 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:07.103 08:42:37 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:09:07.103 08:42:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:07.103 08:42:37 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.103 08:42:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:07.103 ************************************ 00:09:07.103 START TEST raid_state_function_test 00:09:07.103 ************************************ 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:07.103 Process raid pid: 60632 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60632 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60632' 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60632 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60632 ']' 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:07.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.103 08:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.103 [2024-11-20 08:42:37.853947] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:09:07.103 [2024-11-20 08:42:37.854324] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.362 [2024-11-20 08:42:38.031023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.362 [2024-11-20 08:42:38.160711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.622 [2024-11-20 08:42:38.374079] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.622 [2024-11-20 08:42:38.374140] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.191 08:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.191 08:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:08.191 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:08.191 08:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.191 08:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.191 [2024-11-20 08:42:38.824650] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.191 [2024-11-20 08:42:38.824931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:09:08.191 [2024-11-20 08:42:38.824965] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.191 [2024-11-20 08:42:38.824988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.191 08:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.191 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:08.191 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.191 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.191 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.191 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.191 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:08.191 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.191 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.191 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.191 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.191 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.191 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.191 08:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.191 08:42:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:08.191 08:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.191 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.191 "name": "Existed_Raid", 00:09:08.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.191 "strip_size_kb": 64, 00:09:08.191 "state": "configuring", 00:09:08.191 "raid_level": "raid0", 00:09:08.191 "superblock": false, 00:09:08.191 "num_base_bdevs": 2, 00:09:08.191 "num_base_bdevs_discovered": 0, 00:09:08.191 "num_base_bdevs_operational": 2, 00:09:08.191 "base_bdevs_list": [ 00:09:08.191 { 00:09:08.191 "name": "BaseBdev1", 00:09:08.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.191 "is_configured": false, 00:09:08.191 "data_offset": 0, 00:09:08.191 "data_size": 0 00:09:08.191 }, 00:09:08.191 { 00:09:08.191 "name": "BaseBdev2", 00:09:08.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.191 "is_configured": false, 00:09:08.191 "data_offset": 0, 00:09:08.191 "data_size": 0 00:09:08.191 } 00:09:08.191 ] 00:09:08.191 }' 00:09:08.191 08:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.191 08:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.449 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:08.449 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.449 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.449 [2024-11-20 08:42:39.336735] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:08.449 [2024-11-20 08:42:39.336783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:08.449 08:42:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.449 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:08.449 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.449 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.449 [2024-11-20 08:42:39.344698] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.449 [2024-11-20 08:42:39.344764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.449 [2024-11-20 08:42:39.344785] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.449 [2024-11-20 08:42:39.344808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.449 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.449 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:08.449 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.449 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.744 [2024-11-20 08:42:39.391787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:08.744 BaseBdev1 00:09:08.744 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.744 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:08.744 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:08.744 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:09:08.744 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:08.744 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:08.744 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:08.744 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:08.744 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.744 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.744 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.744 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:08.744 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.744 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.744 [ 00:09:08.744 { 00:09:08.744 "name": "BaseBdev1", 00:09:08.744 "aliases": [ 00:09:08.744 "68996b66-8071-4b7e-ad04-3eac7a5a6d09" 00:09:08.744 ], 00:09:08.744 "product_name": "Malloc disk", 00:09:08.744 "block_size": 512, 00:09:08.744 "num_blocks": 65536, 00:09:08.744 "uuid": "68996b66-8071-4b7e-ad04-3eac7a5a6d09", 00:09:08.744 "assigned_rate_limits": { 00:09:08.744 "rw_ios_per_sec": 0, 00:09:08.744 "rw_mbytes_per_sec": 0, 00:09:08.744 "r_mbytes_per_sec": 0, 00:09:08.744 "w_mbytes_per_sec": 0 00:09:08.744 }, 00:09:08.744 "claimed": true, 00:09:08.744 "claim_type": "exclusive_write", 00:09:08.744 "zoned": false, 00:09:08.744 "supported_io_types": { 00:09:08.744 "read": true, 00:09:08.744 "write": true, 00:09:08.744 "unmap": true, 00:09:08.744 "flush": true, 00:09:08.744 "reset": true, 00:09:08.744 "nvme_admin": false, 00:09:08.744 "nvme_io": 
false, 00:09:08.745 "nvme_io_md": false, 00:09:08.745 "write_zeroes": true, 00:09:08.745 "zcopy": true, 00:09:08.745 "get_zone_info": false, 00:09:08.745 "zone_management": false, 00:09:08.745 "zone_append": false, 00:09:08.745 "compare": false, 00:09:08.745 "compare_and_write": false, 00:09:08.745 "abort": true, 00:09:08.745 "seek_hole": false, 00:09:08.745 "seek_data": false, 00:09:08.745 "copy": true, 00:09:08.745 "nvme_iov_md": false 00:09:08.745 }, 00:09:08.745 "memory_domains": [ 00:09:08.745 { 00:09:08.745 "dma_device_id": "system", 00:09:08.745 "dma_device_type": 1 00:09:08.745 }, 00:09:08.745 { 00:09:08.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.745 "dma_device_type": 2 00:09:08.745 } 00:09:08.745 ], 00:09:08.745 "driver_specific": {} 00:09:08.745 } 00:09:08.745 ] 00:09:08.745 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.745 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:08.745 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:08.745 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.745 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.745 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.745 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.745 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:08.745 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.745 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.745 08:42:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.745 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.745 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.745 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.745 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.745 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.745 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.745 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.745 "name": "Existed_Raid", 00:09:08.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.745 "strip_size_kb": 64, 00:09:08.745 "state": "configuring", 00:09:08.745 "raid_level": "raid0", 00:09:08.745 "superblock": false, 00:09:08.745 "num_base_bdevs": 2, 00:09:08.745 "num_base_bdevs_discovered": 1, 00:09:08.745 "num_base_bdevs_operational": 2, 00:09:08.745 "base_bdevs_list": [ 00:09:08.745 { 00:09:08.745 "name": "BaseBdev1", 00:09:08.745 "uuid": "68996b66-8071-4b7e-ad04-3eac7a5a6d09", 00:09:08.745 "is_configured": true, 00:09:08.745 "data_offset": 0, 00:09:08.745 "data_size": 65536 00:09:08.745 }, 00:09:08.745 { 00:09:08.745 "name": "BaseBdev2", 00:09:08.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.745 "is_configured": false, 00:09:08.745 "data_offset": 0, 00:09:08.745 "data_size": 0 00:09:08.745 } 00:09:08.745 ] 00:09:08.745 }' 00:09:08.745 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.745 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.310 08:42:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.310 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.310 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.310 [2024-11-20 08:42:39.956054] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.310 [2024-11-20 08:42:39.956290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:09.310 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.310 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:09.310 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.310 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.310 [2024-11-20 08:42:39.968112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.310 [2024-11-20 08:42:39.970829] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.310 [2024-11-20 08:42:39.970895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.310 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.310 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:09.310 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:09.310 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:09.310 08:42:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.310 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.310 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.310 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.310 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:09.310 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.310 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.310 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.310 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.310 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.310 08:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.310 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.310 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.310 08:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.310 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.310 "name": "Existed_Raid", 00:09:09.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.310 "strip_size_kb": 64, 00:09:09.310 "state": "configuring", 00:09:09.310 "raid_level": "raid0", 00:09:09.310 "superblock": false, 00:09:09.310 "num_base_bdevs": 2, 00:09:09.310 "num_base_bdevs_discovered": 1, 00:09:09.310 "num_base_bdevs_operational": 2, 
00:09:09.310 "base_bdevs_list": [ 00:09:09.310 { 00:09:09.310 "name": "BaseBdev1", 00:09:09.310 "uuid": "68996b66-8071-4b7e-ad04-3eac7a5a6d09", 00:09:09.310 "is_configured": true, 00:09:09.310 "data_offset": 0, 00:09:09.310 "data_size": 65536 00:09:09.310 }, 00:09:09.310 { 00:09:09.310 "name": "BaseBdev2", 00:09:09.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.310 "is_configured": false, 00:09:09.310 "data_offset": 0, 00:09:09.310 "data_size": 0 00:09:09.310 } 00:09:09.310 ] 00:09:09.310 }' 00:09:09.310 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.310 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.876 [2024-11-20 08:42:40.560305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:09.876 [2024-11-20 08:42:40.560411] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:09.876 [2024-11-20 08:42:40.560429] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:09.876 [2024-11-20 08:42:40.560780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:09.876 [2024-11-20 08:42:40.561036] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:09.876 [2024-11-20 08:42:40.561092] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:09.876 [2024-11-20 08:42:40.561508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.876 BaseBdev2 00:09:09.876 
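The `verify_raid_bdev_state` calls in this log fetch the raid bdev JSON via `rpc_cmd bdev_raid_get_bdevs all`, filter it with `jq -r '.[] | select(.name == "Existed_Raid")'`, and check the state fields against expected values. A standalone Python sketch of that validation — field names copied from the `raid_bdev_info` dumps above; the helper name and sample payload are illustrative, not part of the actual test script:

```python
import json

# Illustrative checker mirroring verify_raid_bdev_state from bdev_raid.sh.
# Field names are taken from the raid_bdev_info JSON dumped in this log.
def check_raid_state(info_json, expected_state, raid_level, strip_size_kb,
                     num_operational):
    info = json.loads(info_json)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == num_operational
    # The discovered count must agree with the per-bdev is_configured flags.
    discovered = sum(b["is_configured"] for b in info["base_bdevs_list"])
    assert discovered == info["num_base_bdevs_discovered"]
    return discovered

# Trimmed-down sample payload matching the "configuring" dump above,
# where BaseBdev1 is claimed but BaseBdev2 does not exist yet.
sample = '''{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid0",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 2,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": false}
  ]
}'''
print(check_raid_state(sample, "configuring", "raid0", 64, 2))  # -> 1
```

The same checker covers every state transition exercised here (`configuring` with 0, then 1, then `online` with 2 discovered base bdevs) by varying the expected arguments.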
08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.876 [ 00:09:09.876 { 00:09:09.876 "name": "BaseBdev2", 00:09:09.876 "aliases": [ 00:09:09.876 "98245e93-9517-4bdf-b4cf-e3c315e16bfe" 00:09:09.876 ], 00:09:09.876 "product_name": "Malloc disk", 00:09:09.876 "block_size": 512, 00:09:09.876 "num_blocks": 65536, 00:09:09.876 "uuid": "98245e93-9517-4bdf-b4cf-e3c315e16bfe", 00:09:09.876 "assigned_rate_limits": { 00:09:09.876 "rw_ios_per_sec": 0, 00:09:09.876 "rw_mbytes_per_sec": 0, 
00:09:09.876 "r_mbytes_per_sec": 0, 00:09:09.876 "w_mbytes_per_sec": 0 00:09:09.876 }, 00:09:09.876 "claimed": true, 00:09:09.876 "claim_type": "exclusive_write", 00:09:09.876 "zoned": false, 00:09:09.876 "supported_io_types": { 00:09:09.876 "read": true, 00:09:09.876 "write": true, 00:09:09.876 "unmap": true, 00:09:09.876 "flush": true, 00:09:09.876 "reset": true, 00:09:09.876 "nvme_admin": false, 00:09:09.876 "nvme_io": false, 00:09:09.876 "nvme_io_md": false, 00:09:09.876 "write_zeroes": true, 00:09:09.876 "zcopy": true, 00:09:09.876 "get_zone_info": false, 00:09:09.876 "zone_management": false, 00:09:09.876 "zone_append": false, 00:09:09.876 "compare": false, 00:09:09.876 "compare_and_write": false, 00:09:09.876 "abort": true, 00:09:09.876 "seek_hole": false, 00:09:09.876 "seek_data": false, 00:09:09.876 "copy": true, 00:09:09.876 "nvme_iov_md": false 00:09:09.876 }, 00:09:09.876 "memory_domains": [ 00:09:09.876 { 00:09:09.876 "dma_device_id": "system", 00:09:09.876 "dma_device_type": 1 00:09:09.876 }, 00:09:09.876 { 00:09:09.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.876 "dma_device_type": 2 00:09:09.876 } 00:09:09.876 ], 00:09:09.876 "driver_specific": {} 00:09:09.876 } 00:09:09.876 ] 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.876 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.876 "name": "Existed_Raid", 00:09:09.876 "uuid": "ed55beba-30ee-4c2f-ad8f-b837c4f25726", 00:09:09.876 "strip_size_kb": 64, 00:09:09.876 "state": "online", 00:09:09.876 "raid_level": "raid0", 00:09:09.876 "superblock": false, 00:09:09.877 "num_base_bdevs": 2, 00:09:09.877 "num_base_bdevs_discovered": 2, 00:09:09.877 "num_base_bdevs_operational": 2, 00:09:09.877 "base_bdevs_list": [ 00:09:09.877 { 00:09:09.877 "name": "BaseBdev1", 00:09:09.877 "uuid": "68996b66-8071-4b7e-ad04-3eac7a5a6d09", 00:09:09.877 
"is_configured": true, 00:09:09.877 "data_offset": 0, 00:09:09.877 "data_size": 65536 00:09:09.877 }, 00:09:09.877 { 00:09:09.877 "name": "BaseBdev2", 00:09:09.877 "uuid": "98245e93-9517-4bdf-b4cf-e3c315e16bfe", 00:09:09.877 "is_configured": true, 00:09:09.877 "data_offset": 0, 00:09:09.877 "data_size": 65536 00:09:09.877 } 00:09:09.877 ] 00:09:09.877 }' 00:09:09.877 08:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.877 08:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.443 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:10.443 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:10.443 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:10.443 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:10.443 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:10.443 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:10.443 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:10.443 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.443 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.443 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:10.443 [2024-11-20 08:42:41.116810] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.443 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.443 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:09:10.443 "name": "Existed_Raid", 00:09:10.443 "aliases": [ 00:09:10.443 "ed55beba-30ee-4c2f-ad8f-b837c4f25726" 00:09:10.443 ], 00:09:10.443 "product_name": "Raid Volume", 00:09:10.443 "block_size": 512, 00:09:10.443 "num_blocks": 131072, 00:09:10.443 "uuid": "ed55beba-30ee-4c2f-ad8f-b837c4f25726", 00:09:10.443 "assigned_rate_limits": { 00:09:10.443 "rw_ios_per_sec": 0, 00:09:10.443 "rw_mbytes_per_sec": 0, 00:09:10.443 "r_mbytes_per_sec": 0, 00:09:10.443 "w_mbytes_per_sec": 0 00:09:10.443 }, 00:09:10.443 "claimed": false, 00:09:10.443 "zoned": false, 00:09:10.443 "supported_io_types": { 00:09:10.443 "read": true, 00:09:10.443 "write": true, 00:09:10.443 "unmap": true, 00:09:10.443 "flush": true, 00:09:10.443 "reset": true, 00:09:10.443 "nvme_admin": false, 00:09:10.443 "nvme_io": false, 00:09:10.443 "nvme_io_md": false, 00:09:10.443 "write_zeroes": true, 00:09:10.443 "zcopy": false, 00:09:10.443 "get_zone_info": false, 00:09:10.443 "zone_management": false, 00:09:10.443 "zone_append": false, 00:09:10.444 "compare": false, 00:09:10.444 "compare_and_write": false, 00:09:10.444 "abort": false, 00:09:10.444 "seek_hole": false, 00:09:10.444 "seek_data": false, 00:09:10.444 "copy": false, 00:09:10.444 "nvme_iov_md": false 00:09:10.444 }, 00:09:10.444 "memory_domains": [ 00:09:10.444 { 00:09:10.444 "dma_device_id": "system", 00:09:10.444 "dma_device_type": 1 00:09:10.444 }, 00:09:10.444 { 00:09:10.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.444 "dma_device_type": 2 00:09:10.444 }, 00:09:10.444 { 00:09:10.444 "dma_device_id": "system", 00:09:10.444 "dma_device_type": 1 00:09:10.444 }, 00:09:10.444 { 00:09:10.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.444 "dma_device_type": 2 00:09:10.444 } 00:09:10.444 ], 00:09:10.444 "driver_specific": { 00:09:10.444 "raid": { 00:09:10.444 "uuid": "ed55beba-30ee-4c2f-ad8f-b837c4f25726", 00:09:10.444 "strip_size_kb": 64, 00:09:10.444 "state": "online", 00:09:10.444 "raid_level": "raid0", 
00:09:10.444 "superblock": false, 00:09:10.444 "num_base_bdevs": 2, 00:09:10.444 "num_base_bdevs_discovered": 2, 00:09:10.444 "num_base_bdevs_operational": 2, 00:09:10.444 "base_bdevs_list": [ 00:09:10.444 { 00:09:10.444 "name": "BaseBdev1", 00:09:10.444 "uuid": "68996b66-8071-4b7e-ad04-3eac7a5a6d09", 00:09:10.444 "is_configured": true, 00:09:10.444 "data_offset": 0, 00:09:10.444 "data_size": 65536 00:09:10.444 }, 00:09:10.444 { 00:09:10.444 "name": "BaseBdev2", 00:09:10.444 "uuid": "98245e93-9517-4bdf-b4cf-e3c315e16bfe", 00:09:10.444 "is_configured": true, 00:09:10.444 "data_offset": 0, 00:09:10.444 "data_size": 65536 00:09:10.444 } 00:09:10.444 ] 00:09:10.444 } 00:09:10.444 } 00:09:10.444 }' 00:09:10.444 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:10.444 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:10.444 BaseBdev2' 00:09:10.444 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.444 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:10.444 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:10.444 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:10.444 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.444 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.444 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.444 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:10.444 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:10.444 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:10.444 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:10.444 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:10.444 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.444 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.444 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.444 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.750 [2024-11-20 08:42:41.384607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:10.750 [2024-11-20 08:42:41.384659] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:10.750 [2024-11-20 08:42:41.384736] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.750 08:42:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.750 "name": "Existed_Raid", 00:09:10.750 "uuid": "ed55beba-30ee-4c2f-ad8f-b837c4f25726", 00:09:10.750 "strip_size_kb": 64, 00:09:10.750 "state": "offline", 00:09:10.750 "raid_level": "raid0", 00:09:10.750 "superblock": false, 00:09:10.750 "num_base_bdevs": 2, 00:09:10.750 "num_base_bdevs_discovered": 1, 00:09:10.750 "num_base_bdevs_operational": 1, 00:09:10.750 "base_bdevs_list": [ 00:09:10.750 { 00:09:10.750 "name": null, 00:09:10.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.750 "is_configured": false, 00:09:10.750 "data_offset": 0, 00:09:10.750 "data_size": 65536 00:09:10.750 }, 00:09:10.750 { 00:09:10.750 "name": "BaseBdev2", 00:09:10.750 "uuid": "98245e93-9517-4bdf-b4cf-e3c315e16bfe", 00:09:10.750 "is_configured": true, 00:09:10.750 "data_offset": 0, 00:09:10.750 "data_size": 65536 00:09:10.750 } 00:09:10.750 ] 00:09:10.750 }' 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.750 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.335 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:11.335 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:11.335 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.335 08:42:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:11.335 08:42:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.335 08:42:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.335 [2024-11-20 08:42:42.050762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:11.335 [2024-11-20 08:42:42.050841] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 
00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60632 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60632 ']' 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60632 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60632 00:09:11.335 killing process with pid 60632 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60632' 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60632 00:09:11.335 [2024-11-20 08:42:42.225601] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:11.335 08:42:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60632 00:09:11.335 [2024-11-20 08:42:42.241827] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:12.713 00:09:12.713 real 0m5.538s 00:09:12.713 user 0m8.352s 00:09:12.713 sys 0m0.767s 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:12.713 ************************************ 00:09:12.713 END TEST raid_state_function_test 00:09:12.713 ************************************ 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.713 08:42:43 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:09:12.713 08:42:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:12.713 08:42:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.713 08:42:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:12.713 ************************************ 00:09:12.713 START TEST raid_state_function_test_sb 00:09:12.713 ************************************ 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:12.713 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:12.714 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:12.714 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:12.714 Process raid pid: 60890 00:09:12.714 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60890 00:09:12.714 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:12.714 08:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60890' 00:09:12.714 08:42:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60890 00:09:12.714 08:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60890 ']' 00:09:12.714 08:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.714 08:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.714 08:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.714 08:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.714 08:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.714 [2024-11-20 08:42:43.466075] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:09:12.714 [2024-11-20 08:42:43.466302] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.973 [2024-11-20 08:42:43.660091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.973 [2024-11-20 08:42:43.823862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.231 [2024-11-20 08:42:44.021224] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.231 [2024-11-20 08:42:44.021277] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.799 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.799 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:13.799 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:13.799 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.799 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.799 [2024-11-20 08:42:44.486860] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:13.799 [2024-11-20 08:42:44.486941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:13.799 [2024-11-20 08:42:44.486958] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.799 [2024-11-20 08:42:44.486974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.799 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.799 
08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:13.799 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.799 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.799 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.799 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.799 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:13.799 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.799 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.799 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.799 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.799 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.799 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.799 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.799 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.799 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.799 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.799 "name": "Existed_Raid", 00:09:13.799 "uuid": "316a4ece-c980-411f-91ad-86a3baf781de", 00:09:13.799 "strip_size_kb": 
64, 00:09:13.799 "state": "configuring", 00:09:13.799 "raid_level": "raid0", 00:09:13.799 "superblock": true, 00:09:13.799 "num_base_bdevs": 2, 00:09:13.799 "num_base_bdevs_discovered": 0, 00:09:13.799 "num_base_bdevs_operational": 2, 00:09:13.799 "base_bdevs_list": [ 00:09:13.799 { 00:09:13.799 "name": "BaseBdev1", 00:09:13.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.799 "is_configured": false, 00:09:13.799 "data_offset": 0, 00:09:13.799 "data_size": 0 00:09:13.799 }, 00:09:13.799 { 00:09:13.799 "name": "BaseBdev2", 00:09:13.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.799 "is_configured": false, 00:09:13.799 "data_offset": 0, 00:09:13.799 "data_size": 0 00:09:13.799 } 00:09:13.799 ] 00:09:13.799 }' 00:09:13.799 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.799 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.367 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.367 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.367 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.367 [2024-11-20 08:42:44.994981] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.367 [2024-11-20 08:42:44.995020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:14.367 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.367 08:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:14.367 08:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.367 08:42:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.368 [2024-11-20 08:42:45.002963] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.368 [2024-11-20 08:42:45.003030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.368 [2024-11-20 08:42:45.003060] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.368 [2024-11-20 08:42:45.003077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.368 [2024-11-20 08:42:45.045590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.368 BaseBdev1 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.368 [ 00:09:14.368 { 00:09:14.368 "name": "BaseBdev1", 00:09:14.368 "aliases": [ 00:09:14.368 "4ca518f3-61a7-43c0-9ad3-b246ddf37e53" 00:09:14.368 ], 00:09:14.368 "product_name": "Malloc disk", 00:09:14.368 "block_size": 512, 00:09:14.368 "num_blocks": 65536, 00:09:14.368 "uuid": "4ca518f3-61a7-43c0-9ad3-b246ddf37e53", 00:09:14.368 "assigned_rate_limits": { 00:09:14.368 "rw_ios_per_sec": 0, 00:09:14.368 "rw_mbytes_per_sec": 0, 00:09:14.368 "r_mbytes_per_sec": 0, 00:09:14.368 "w_mbytes_per_sec": 0 00:09:14.368 }, 00:09:14.368 "claimed": true, 00:09:14.368 "claim_type": "exclusive_write", 00:09:14.368 "zoned": false, 00:09:14.368 "supported_io_types": { 00:09:14.368 "read": true, 00:09:14.368 "write": true, 00:09:14.368 "unmap": true, 00:09:14.368 "flush": true, 00:09:14.368 "reset": true, 00:09:14.368 "nvme_admin": false, 00:09:14.368 "nvme_io": false, 00:09:14.368 "nvme_io_md": false, 00:09:14.368 "write_zeroes": true, 00:09:14.368 "zcopy": true, 00:09:14.368 "get_zone_info": false, 00:09:14.368 "zone_management": false, 00:09:14.368 "zone_append": false, 00:09:14.368 "compare": false, 00:09:14.368 "compare_and_write": false, 00:09:14.368 
"abort": true, 00:09:14.368 "seek_hole": false, 00:09:14.368 "seek_data": false, 00:09:14.368 "copy": true, 00:09:14.368 "nvme_iov_md": false 00:09:14.368 }, 00:09:14.368 "memory_domains": [ 00:09:14.368 { 00:09:14.368 "dma_device_id": "system", 00:09:14.368 "dma_device_type": 1 00:09:14.368 }, 00:09:14.368 { 00:09:14.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.368 "dma_device_type": 2 00:09:14.368 } 00:09:14.368 ], 00:09:14.368 "driver_specific": {} 00:09:14.368 } 00:09:14.368 ] 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.368 "name": "Existed_Raid", 00:09:14.368 "uuid": "c09f0486-55a1-444e-817f-e685cb913a5c", 00:09:14.368 "strip_size_kb": 64, 00:09:14.368 "state": "configuring", 00:09:14.368 "raid_level": "raid0", 00:09:14.368 "superblock": true, 00:09:14.368 "num_base_bdevs": 2, 00:09:14.368 "num_base_bdevs_discovered": 1, 00:09:14.368 "num_base_bdevs_operational": 2, 00:09:14.368 "base_bdevs_list": [ 00:09:14.368 { 00:09:14.368 "name": "BaseBdev1", 00:09:14.368 "uuid": "4ca518f3-61a7-43c0-9ad3-b246ddf37e53", 00:09:14.368 "is_configured": true, 00:09:14.368 "data_offset": 2048, 00:09:14.368 "data_size": 63488 00:09:14.368 }, 00:09:14.368 { 00:09:14.368 "name": "BaseBdev2", 00:09:14.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.368 "is_configured": false, 00:09:14.368 "data_offset": 0, 00:09:14.368 "data_size": 0 00:09:14.368 } 00:09:14.368 ] 00:09:14.368 }' 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.368 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:14.937 [2024-11-20 08:42:45.593854] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.937 [2024-11-20 08:42:45.593915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.937 [2024-11-20 08:42:45.601884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.937 [2024-11-20 08:42:45.604408] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.937 [2024-11-20 08:42:45.604466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.937 "name": "Existed_Raid", 00:09:14.937 "uuid": "b3dc52e8-803a-4e4a-a80d-0a77624a7a98", 00:09:14.937 "strip_size_kb": 64, 00:09:14.937 "state": "configuring", 00:09:14.937 "raid_level": "raid0", 00:09:14.937 "superblock": true, 00:09:14.937 "num_base_bdevs": 2, 00:09:14.937 "num_base_bdevs_discovered": 1, 00:09:14.937 "num_base_bdevs_operational": 2, 00:09:14.937 "base_bdevs_list": [ 00:09:14.937 { 00:09:14.937 "name": "BaseBdev1", 00:09:14.937 "uuid": "4ca518f3-61a7-43c0-9ad3-b246ddf37e53", 00:09:14.937 "is_configured": true, 00:09:14.937 "data_offset": 2048, 
00:09:14.937 "data_size": 63488 00:09:14.937 }, 00:09:14.937 { 00:09:14.937 "name": "BaseBdev2", 00:09:14.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.937 "is_configured": false, 00:09:14.937 "data_offset": 0, 00:09:14.937 "data_size": 0 00:09:14.937 } 00:09:14.937 ] 00:09:14.937 }' 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.937 08:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.196 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:15.196 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.196 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.456 [2024-11-20 08:42:46.136774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.456 [2024-11-20 08:42:46.137426] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:15.456 [2024-11-20 08:42:46.137452] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:15.456 BaseBdev2 00:09:15.456 [2024-11-20 08:42:46.137828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:15.456 [2024-11-20 08:42:46.138035] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:15.456 [2024-11-20 08:42:46.138056] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:15.456 [2024-11-20 08:42:46.138245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.456 [ 00:09:15.456 { 00:09:15.456 "name": "BaseBdev2", 00:09:15.456 "aliases": [ 00:09:15.456 "ec795ed2-0250-4097-a11b-68363e9ecfc9" 00:09:15.456 ], 00:09:15.456 "product_name": "Malloc disk", 00:09:15.456 "block_size": 512, 00:09:15.456 "num_blocks": 65536, 00:09:15.456 "uuid": "ec795ed2-0250-4097-a11b-68363e9ecfc9", 00:09:15.456 "assigned_rate_limits": { 00:09:15.456 "rw_ios_per_sec": 0, 00:09:15.456 "rw_mbytes_per_sec": 0, 00:09:15.456 "r_mbytes_per_sec": 0, 00:09:15.456 "w_mbytes_per_sec": 0 00:09:15.456 }, 00:09:15.456 "claimed": true, 00:09:15.456 "claim_type": 
"exclusive_write", 00:09:15.456 "zoned": false, 00:09:15.456 "supported_io_types": { 00:09:15.456 "read": true, 00:09:15.456 "write": true, 00:09:15.456 "unmap": true, 00:09:15.456 "flush": true, 00:09:15.456 "reset": true, 00:09:15.456 "nvme_admin": false, 00:09:15.456 "nvme_io": false, 00:09:15.456 "nvme_io_md": false, 00:09:15.456 "write_zeroes": true, 00:09:15.456 "zcopy": true, 00:09:15.456 "get_zone_info": false, 00:09:15.456 "zone_management": false, 00:09:15.456 "zone_append": false, 00:09:15.456 "compare": false, 00:09:15.456 "compare_and_write": false, 00:09:15.456 "abort": true, 00:09:15.456 "seek_hole": false, 00:09:15.456 "seek_data": false, 00:09:15.456 "copy": true, 00:09:15.456 "nvme_iov_md": false 00:09:15.456 }, 00:09:15.456 "memory_domains": [ 00:09:15.456 { 00:09:15.456 "dma_device_id": "system", 00:09:15.456 "dma_device_type": 1 00:09:15.456 }, 00:09:15.456 { 00:09:15.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.456 "dma_device_type": 2 00:09:15.456 } 00:09:15.456 ], 00:09:15.456 "driver_specific": {} 00:09:15.456 } 00:09:15.456 ] 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.456 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.456 "name": "Existed_Raid", 00:09:15.456 "uuid": "b3dc52e8-803a-4e4a-a80d-0a77624a7a98", 00:09:15.456 "strip_size_kb": 64, 00:09:15.456 "state": "online", 00:09:15.456 "raid_level": "raid0", 00:09:15.456 "superblock": true, 00:09:15.456 "num_base_bdevs": 2, 00:09:15.456 "num_base_bdevs_discovered": 2, 00:09:15.456 "num_base_bdevs_operational": 2, 00:09:15.456 "base_bdevs_list": [ 00:09:15.456 { 00:09:15.457 "name": "BaseBdev1", 00:09:15.457 "uuid": "4ca518f3-61a7-43c0-9ad3-b246ddf37e53", 00:09:15.457 "is_configured": true, 00:09:15.457 "data_offset": 2048, 00:09:15.457 "data_size": 63488 
00:09:15.457 }, 00:09:15.457 { 00:09:15.457 "name": "BaseBdev2", 00:09:15.457 "uuid": "ec795ed2-0250-4097-a11b-68363e9ecfc9", 00:09:15.457 "is_configured": true, 00:09:15.457 "data_offset": 2048, 00:09:15.457 "data_size": 63488 00:09:15.457 } 00:09:15.457 ] 00:09:15.457 }' 00:09:15.457 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.457 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.025 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.025 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:16.025 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.025 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.025 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.025 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.025 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.025 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.025 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.025 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.025 [2024-11-20 08:42:46.697386] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.025 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.025 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.025 "name": 
"Existed_Raid", 00:09:16.025 "aliases": [ 00:09:16.025 "b3dc52e8-803a-4e4a-a80d-0a77624a7a98" 00:09:16.025 ], 00:09:16.025 "product_name": "Raid Volume", 00:09:16.025 "block_size": 512, 00:09:16.025 "num_blocks": 126976, 00:09:16.025 "uuid": "b3dc52e8-803a-4e4a-a80d-0a77624a7a98", 00:09:16.025 "assigned_rate_limits": { 00:09:16.025 "rw_ios_per_sec": 0, 00:09:16.025 "rw_mbytes_per_sec": 0, 00:09:16.025 "r_mbytes_per_sec": 0, 00:09:16.025 "w_mbytes_per_sec": 0 00:09:16.025 }, 00:09:16.025 "claimed": false, 00:09:16.025 "zoned": false, 00:09:16.025 "supported_io_types": { 00:09:16.025 "read": true, 00:09:16.025 "write": true, 00:09:16.025 "unmap": true, 00:09:16.025 "flush": true, 00:09:16.025 "reset": true, 00:09:16.025 "nvme_admin": false, 00:09:16.025 "nvme_io": false, 00:09:16.025 "nvme_io_md": false, 00:09:16.025 "write_zeroes": true, 00:09:16.025 "zcopy": false, 00:09:16.025 "get_zone_info": false, 00:09:16.025 "zone_management": false, 00:09:16.025 "zone_append": false, 00:09:16.025 "compare": false, 00:09:16.025 "compare_and_write": false, 00:09:16.025 "abort": false, 00:09:16.025 "seek_hole": false, 00:09:16.025 "seek_data": false, 00:09:16.025 "copy": false, 00:09:16.025 "nvme_iov_md": false 00:09:16.025 }, 00:09:16.025 "memory_domains": [ 00:09:16.025 { 00:09:16.025 "dma_device_id": "system", 00:09:16.025 "dma_device_type": 1 00:09:16.025 }, 00:09:16.025 { 00:09:16.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.025 "dma_device_type": 2 00:09:16.025 }, 00:09:16.025 { 00:09:16.025 "dma_device_id": "system", 00:09:16.025 "dma_device_type": 1 00:09:16.025 }, 00:09:16.025 { 00:09:16.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.025 "dma_device_type": 2 00:09:16.025 } 00:09:16.025 ], 00:09:16.025 "driver_specific": { 00:09:16.025 "raid": { 00:09:16.025 "uuid": "b3dc52e8-803a-4e4a-a80d-0a77624a7a98", 00:09:16.025 "strip_size_kb": 64, 00:09:16.025 "state": "online", 00:09:16.025 "raid_level": "raid0", 00:09:16.025 "superblock": true, 00:09:16.025 
"num_base_bdevs": 2, 00:09:16.025 "num_base_bdevs_discovered": 2, 00:09:16.025 "num_base_bdevs_operational": 2, 00:09:16.025 "base_bdevs_list": [ 00:09:16.025 { 00:09:16.025 "name": "BaseBdev1", 00:09:16.025 "uuid": "4ca518f3-61a7-43c0-9ad3-b246ddf37e53", 00:09:16.025 "is_configured": true, 00:09:16.026 "data_offset": 2048, 00:09:16.026 "data_size": 63488 00:09:16.026 }, 00:09:16.026 { 00:09:16.026 "name": "BaseBdev2", 00:09:16.026 "uuid": "ec795ed2-0250-4097-a11b-68363e9ecfc9", 00:09:16.026 "is_configured": true, 00:09:16.026 "data_offset": 2048, 00:09:16.026 "data_size": 63488 00:09:16.026 } 00:09:16.026 ] 00:09:16.026 } 00:09:16.026 } 00:09:16.026 }' 00:09:16.026 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.026 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:16.026 BaseBdev2' 00:09:16.026 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.026 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.026 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.026 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:16.026 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.026 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.026 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.026 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:16.026 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.026 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.026 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.026 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.026 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.026 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.026 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.026 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.352 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.352 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.352 08:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:16.352 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.352 08:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.352 [2024-11-20 08:42:46.969117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:16.352 [2024-11-20 08:42:46.969185] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.352 [2024-11-20 08:42:46.969252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.352 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:16.352 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:16.352 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:16.352 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:16.352 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:16.352 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:16.352 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:16.352 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.352 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:16.352 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.352 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.352 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:16.352 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.352 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.352 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.352 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.352 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.352 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.352 08:42:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.352 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.352 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.352 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.352 "name": "Existed_Raid", 00:09:16.352 "uuid": "b3dc52e8-803a-4e4a-a80d-0a77624a7a98", 00:09:16.352 "strip_size_kb": 64, 00:09:16.352 "state": "offline", 00:09:16.352 "raid_level": "raid0", 00:09:16.352 "superblock": true, 00:09:16.352 "num_base_bdevs": 2, 00:09:16.352 "num_base_bdevs_discovered": 1, 00:09:16.352 "num_base_bdevs_operational": 1, 00:09:16.352 "base_bdevs_list": [ 00:09:16.352 { 00:09:16.352 "name": null, 00:09:16.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.352 "is_configured": false, 00:09:16.352 "data_offset": 0, 00:09:16.352 "data_size": 63488 00:09:16.352 }, 00:09:16.352 { 00:09:16.352 "name": "BaseBdev2", 00:09:16.352 "uuid": "ec795ed2-0250-4097-a11b-68363e9ecfc9", 00:09:16.352 "is_configured": true, 00:09:16.352 "data_offset": 2048, 00:09:16.352 "data_size": 63488 00:09:16.352 } 00:09:16.352 ] 00:09:16.352 }' 00:09:16.352 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.352 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.934 08:42:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.934 [2024-11-20 08:42:47.635307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:16.934 [2024-11-20 08:42:47.635373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60890 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60890 ']' 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60890 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60890 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.934 killing process with pid 60890 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60890' 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60890 00:09:16.934 [2024-11-20 08:42:47.813079] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:16.934 08:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60890 00:09:16.934 [2024-11-20 08:42:47.828508] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:18.314 08:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:09:18.314 00:09:18.314 real 0m5.524s 00:09:18.314 user 0m8.351s 00:09:18.314 sys 0m0.789s 00:09:18.314 08:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.314 ************************************ 00:09:18.314 END TEST raid_state_function_test_sb 00:09:18.314 ************************************ 00:09:18.314 08:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.314 08:42:48 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:09:18.314 08:42:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:18.314 08:42:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.314 08:42:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:18.314 ************************************ 00:09:18.314 START TEST raid_superblock_test 00:09:18.314 ************************************ 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61148 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61148 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61148 ']' 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.314 08:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.314 [2024-11-20 08:42:49.051897] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:09:18.314 [2024-11-20 08:42:49.052370] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61148 ] 00:09:18.573 [2024-11-20 08:42:49.237287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.573 [2024-11-20 08:42:49.368973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.831 [2024-11-20 08:42:49.575663] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.831 [2024-11-20 08:42:49.575841] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.399 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.399 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:19.399 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:19.399 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:19.399 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:19.400 08:42:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.400 malloc1 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.400 [2024-11-20 08:42:50.152191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:19.400 [2024-11-20 08:42:50.152276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.400 [2024-11-20 08:42:50.152313] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:19.400 [2024-11-20 08:42:50.152329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.400 [2024-11-20 08:42:50.155093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.400 [2024-11-20 08:42:50.155285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:19.400 pt1 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:19.400 08:42:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.400 malloc2 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.400 [2024-11-20 08:42:50.208239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:19.400 [2024-11-20 08:42:50.208321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.400 [2024-11-20 08:42:50.208356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:19.400 
[2024-11-20 08:42:50.208381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.400 [2024-11-20 08:42:50.211313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.400 [2024-11-20 08:42:50.211368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:19.400 pt2 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.400 [2024-11-20 08:42:50.220429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:19.400 [2024-11-20 08:42:50.222905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:19.400 [2024-11-20 08:42:50.223295] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:19.400 [2024-11-20 08:42:50.223321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:19.400 [2024-11-20 08:42:50.223698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:19.400 [2024-11-20 08:42:50.223903] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:19.400 [2024-11-20 08:42:50.223926] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:19.400 [2024-11-20 08:42:50.224161] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.400 "name": "raid_bdev1", 00:09:19.400 "uuid": 
"66404939-3a83-4c1b-bfb6-98e4e2117f2a", 00:09:19.400 "strip_size_kb": 64, 00:09:19.400 "state": "online", 00:09:19.400 "raid_level": "raid0", 00:09:19.400 "superblock": true, 00:09:19.400 "num_base_bdevs": 2, 00:09:19.400 "num_base_bdevs_discovered": 2, 00:09:19.400 "num_base_bdevs_operational": 2, 00:09:19.400 "base_bdevs_list": [ 00:09:19.400 { 00:09:19.400 "name": "pt1", 00:09:19.400 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:19.400 "is_configured": true, 00:09:19.400 "data_offset": 2048, 00:09:19.400 "data_size": 63488 00:09:19.400 }, 00:09:19.400 { 00:09:19.400 "name": "pt2", 00:09:19.400 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.400 "is_configured": true, 00:09:19.400 "data_offset": 2048, 00:09:19.400 "data_size": 63488 00:09:19.400 } 00:09:19.400 ] 00:09:19.400 }' 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.400 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.978 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:19.978 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:19.978 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:19.978 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:19.978 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:19.978 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:19.978 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:19.978 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.978 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.978 
08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:19.978 [2024-11-20 08:42:50.720870] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.978 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.978 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:19.978 "name": "raid_bdev1", 00:09:19.978 "aliases": [ 00:09:19.978 "66404939-3a83-4c1b-bfb6-98e4e2117f2a" 00:09:19.978 ], 00:09:19.978 "product_name": "Raid Volume", 00:09:19.978 "block_size": 512, 00:09:19.978 "num_blocks": 126976, 00:09:19.978 "uuid": "66404939-3a83-4c1b-bfb6-98e4e2117f2a", 00:09:19.978 "assigned_rate_limits": { 00:09:19.978 "rw_ios_per_sec": 0, 00:09:19.978 "rw_mbytes_per_sec": 0, 00:09:19.978 "r_mbytes_per_sec": 0, 00:09:19.978 "w_mbytes_per_sec": 0 00:09:19.978 }, 00:09:19.978 "claimed": false, 00:09:19.978 "zoned": false, 00:09:19.978 "supported_io_types": { 00:09:19.978 "read": true, 00:09:19.978 "write": true, 00:09:19.978 "unmap": true, 00:09:19.978 "flush": true, 00:09:19.978 "reset": true, 00:09:19.978 "nvme_admin": false, 00:09:19.978 "nvme_io": false, 00:09:19.978 "nvme_io_md": false, 00:09:19.978 "write_zeroes": true, 00:09:19.978 "zcopy": false, 00:09:19.978 "get_zone_info": false, 00:09:19.978 "zone_management": false, 00:09:19.978 "zone_append": false, 00:09:19.978 "compare": false, 00:09:19.978 "compare_and_write": false, 00:09:19.978 "abort": false, 00:09:19.978 "seek_hole": false, 00:09:19.978 "seek_data": false, 00:09:19.978 "copy": false, 00:09:19.978 "nvme_iov_md": false 00:09:19.978 }, 00:09:19.978 "memory_domains": [ 00:09:19.978 { 00:09:19.978 "dma_device_id": "system", 00:09:19.978 "dma_device_type": 1 00:09:19.978 }, 00:09:19.978 { 00:09:19.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.978 "dma_device_type": 2 00:09:19.978 }, 00:09:19.978 { 00:09:19.978 "dma_device_id": "system", 00:09:19.978 
"dma_device_type": 1 00:09:19.978 }, 00:09:19.978 { 00:09:19.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.978 "dma_device_type": 2 00:09:19.978 } 00:09:19.978 ], 00:09:19.978 "driver_specific": { 00:09:19.978 "raid": { 00:09:19.978 "uuid": "66404939-3a83-4c1b-bfb6-98e4e2117f2a", 00:09:19.978 "strip_size_kb": 64, 00:09:19.978 "state": "online", 00:09:19.978 "raid_level": "raid0", 00:09:19.978 "superblock": true, 00:09:19.978 "num_base_bdevs": 2, 00:09:19.978 "num_base_bdevs_discovered": 2, 00:09:19.978 "num_base_bdevs_operational": 2, 00:09:19.978 "base_bdevs_list": [ 00:09:19.978 { 00:09:19.978 "name": "pt1", 00:09:19.978 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:19.978 "is_configured": true, 00:09:19.978 "data_offset": 2048, 00:09:19.978 "data_size": 63488 00:09:19.978 }, 00:09:19.978 { 00:09:19.978 "name": "pt2", 00:09:19.978 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.978 "is_configured": true, 00:09:19.978 "data_offset": 2048, 00:09:19.978 "data_size": 63488 00:09:19.978 } 00:09:19.978 ] 00:09:19.978 } 00:09:19.978 } 00:09:19.978 }' 00:09:19.978 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:19.978 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:19.978 pt2' 00:09:19.978 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.978 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:19.978 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.978 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:19.978 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.978 08:42:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.978 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.978 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.237 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.237 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.237 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.237 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:20.237 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.237 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.237 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.237 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.237 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.237 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.237 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:20.237 08:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:20.237 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.237 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.237 [2024-11-20 08:42:50.976887] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:09:20.237 08:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=66404939-3a83-4c1b-bfb6-98e4e2117f2a 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 66404939-3a83-4c1b-bfb6-98e4e2117f2a ']' 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.237 [2024-11-20 08:42:51.024509] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.237 [2024-11-20 08:42:51.024536] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.237 [2024-11-20 08:42:51.024623] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.237 [2024-11-20 08:42:51.024701] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.237 [2024-11-20 08:42:51.024723] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.237 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.496 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:20.496 08:42:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:20.496 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:20.496 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:20.496 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:20.496 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.496 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:20.496 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.496 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:20.496 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.496 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.496 [2024-11-20 08:42:51.160613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:20.496 [2024-11-20 08:42:51.163344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:20.496 [2024-11-20 08:42:51.163573] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:20.496 [2024-11-20 08:42:51.163803] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:20.496 [2024-11-20 08:42:51.164017] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.496 [2024-11-20 08:42:51.164073] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:20.496 request: 00:09:20.497 { 00:09:20.497 "name": "raid_bdev1", 00:09:20.497 "raid_level": "raid0", 00:09:20.497 "base_bdevs": [ 00:09:20.497 "malloc1", 00:09:20.497 "malloc2" 00:09:20.497 ], 00:09:20.497 "strip_size_kb": 64, 00:09:20.497 "superblock": false, 00:09:20.497 "method": "bdev_raid_create", 00:09:20.497 "req_id": 1 00:09:20.497 } 00:09:20.497 Got JSON-RPC error response 00:09:20.497 response: 00:09:20.497 { 00:09:20.497 "code": -17, 00:09:20.497 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:20.497 } 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.497 [2024-11-20 08:42:51.220762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:20.497 [2024-11-20 08:42:51.220838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.497 [2024-11-20 08:42:51.220869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:20.497 [2024-11-20 08:42:51.220886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.497 [2024-11-20 08:42:51.223747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.497 [2024-11-20 08:42:51.223922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:20.497 [2024-11-20 08:42:51.224040] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:20.497 [2024-11-20 08:42:51.224120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:20.497 pt1 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.497 "name": "raid_bdev1", 00:09:20.497 "uuid": "66404939-3a83-4c1b-bfb6-98e4e2117f2a", 00:09:20.497 "strip_size_kb": 64, 00:09:20.497 "state": "configuring", 00:09:20.497 "raid_level": "raid0", 00:09:20.497 "superblock": true, 00:09:20.497 "num_base_bdevs": 2, 00:09:20.497 "num_base_bdevs_discovered": 1, 00:09:20.497 "num_base_bdevs_operational": 2, 00:09:20.497 "base_bdevs_list": [ 00:09:20.497 { 00:09:20.497 "name": "pt1", 00:09:20.497 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.497 "is_configured": true, 00:09:20.497 "data_offset": 2048, 00:09:20.497 "data_size": 63488 00:09:20.497 }, 00:09:20.497 { 00:09:20.497 "name": null, 00:09:20.497 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.497 "is_configured": false, 00:09:20.497 "data_offset": 2048, 00:09:20.497 "data_size": 63488 00:09:20.497 } 00:09:20.497 ] 00:09:20.497 }' 00:09:20.497 08:42:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.497 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.066 [2024-11-20 08:42:51.728935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:21.066 [2024-11-20 08:42:51.729179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.066 [2024-11-20 08:42:51.729220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:21.066 [2024-11-20 08:42:51.729240] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.066 [2024-11-20 08:42:51.729819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.066 [2024-11-20 08:42:51.729868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:21.066 [2024-11-20 08:42:51.729975] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:21.066 [2024-11-20 08:42:51.730019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:21.066 [2024-11-20 08:42:51.730177] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:21.066 [2024-11-20 08:42:51.730200] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:21.066 [2024-11-20 08:42:51.730500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:21.066 [2024-11-20 08:42:51.730689] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:21.066 [2024-11-20 08:42:51.730706] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:21.066 [2024-11-20 08:42:51.730874] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.066 pt2 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.066 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.066 "name": "raid_bdev1", 00:09:21.066 "uuid": "66404939-3a83-4c1b-bfb6-98e4e2117f2a", 00:09:21.066 "strip_size_kb": 64, 00:09:21.066 "state": "online", 00:09:21.066 "raid_level": "raid0", 00:09:21.066 "superblock": true, 00:09:21.066 "num_base_bdevs": 2, 00:09:21.066 "num_base_bdevs_discovered": 2, 00:09:21.066 "num_base_bdevs_operational": 2, 00:09:21.066 "base_bdevs_list": [ 00:09:21.066 { 00:09:21.066 "name": "pt1", 00:09:21.067 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:21.067 "is_configured": true, 00:09:21.067 "data_offset": 2048, 00:09:21.067 "data_size": 63488 00:09:21.067 }, 00:09:21.067 { 00:09:21.067 "name": "pt2", 00:09:21.067 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:21.067 "is_configured": true, 00:09:21.067 "data_offset": 2048, 00:09:21.067 "data_size": 63488 00:09:21.067 } 00:09:21.067 ] 00:09:21.067 }' 00:09:21.067 08:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.067 08:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:21.635 
08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.635 [2024-11-20 08:42:52.269388] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:21.635 "name": "raid_bdev1", 00:09:21.635 "aliases": [ 00:09:21.635 "66404939-3a83-4c1b-bfb6-98e4e2117f2a" 00:09:21.635 ], 00:09:21.635 "product_name": "Raid Volume", 00:09:21.635 "block_size": 512, 00:09:21.635 "num_blocks": 126976, 00:09:21.635 "uuid": "66404939-3a83-4c1b-bfb6-98e4e2117f2a", 00:09:21.635 "assigned_rate_limits": { 00:09:21.635 "rw_ios_per_sec": 0, 00:09:21.635 "rw_mbytes_per_sec": 0, 00:09:21.635 "r_mbytes_per_sec": 0, 00:09:21.635 "w_mbytes_per_sec": 0 00:09:21.635 }, 00:09:21.635 "claimed": false, 00:09:21.635 "zoned": false, 00:09:21.635 "supported_io_types": { 00:09:21.635 "read": true, 00:09:21.635 "write": true, 00:09:21.635 "unmap": true, 00:09:21.635 "flush": true, 00:09:21.635 "reset": true, 00:09:21.635 "nvme_admin": false, 00:09:21.635 "nvme_io": false, 00:09:21.635 "nvme_io_md": false, 00:09:21.635 
"write_zeroes": true, 00:09:21.635 "zcopy": false, 00:09:21.635 "get_zone_info": false, 00:09:21.635 "zone_management": false, 00:09:21.635 "zone_append": false, 00:09:21.635 "compare": false, 00:09:21.635 "compare_and_write": false, 00:09:21.635 "abort": false, 00:09:21.635 "seek_hole": false, 00:09:21.635 "seek_data": false, 00:09:21.635 "copy": false, 00:09:21.635 "nvme_iov_md": false 00:09:21.635 }, 00:09:21.635 "memory_domains": [ 00:09:21.635 { 00:09:21.635 "dma_device_id": "system", 00:09:21.635 "dma_device_type": 1 00:09:21.635 }, 00:09:21.635 { 00:09:21.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.635 "dma_device_type": 2 00:09:21.635 }, 00:09:21.635 { 00:09:21.635 "dma_device_id": "system", 00:09:21.635 "dma_device_type": 1 00:09:21.635 }, 00:09:21.635 { 00:09:21.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.635 "dma_device_type": 2 00:09:21.635 } 00:09:21.635 ], 00:09:21.635 "driver_specific": { 00:09:21.635 "raid": { 00:09:21.635 "uuid": "66404939-3a83-4c1b-bfb6-98e4e2117f2a", 00:09:21.635 "strip_size_kb": 64, 00:09:21.635 "state": "online", 00:09:21.635 "raid_level": "raid0", 00:09:21.635 "superblock": true, 00:09:21.635 "num_base_bdevs": 2, 00:09:21.635 "num_base_bdevs_discovered": 2, 00:09:21.635 "num_base_bdevs_operational": 2, 00:09:21.635 "base_bdevs_list": [ 00:09:21.635 { 00:09:21.635 "name": "pt1", 00:09:21.635 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:21.635 "is_configured": true, 00:09:21.635 "data_offset": 2048, 00:09:21.635 "data_size": 63488 00:09:21.635 }, 00:09:21.635 { 00:09:21.635 "name": "pt2", 00:09:21.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:21.635 "is_configured": true, 00:09:21.635 "data_offset": 2048, 00:09:21.635 "data_size": 63488 00:09:21.635 } 00:09:21.635 ] 00:09:21.635 } 00:09:21.635 } 00:09:21.635 }' 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:21.635 pt2' 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.635 08:42:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.635 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.635 [2024-11-20 08:42:52.541456] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.894 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.894 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 66404939-3a83-4c1b-bfb6-98e4e2117f2a '!=' 66404939-3a83-4c1b-bfb6-98e4e2117f2a ']' 00:09:21.894 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:21.894 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:21.894 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:21.894 08:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61148 00:09:21.894 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61148 ']' 00:09:21.894 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61148 00:09:21.894 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:21.894 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.894 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61148 00:09:21.894 08:42:52 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:21.894 killing process with pid 61148 00:09:21.894 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:21.894 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61148' 00:09:21.894 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61148 00:09:21.894 [2024-11-20 08:42:52.629333] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:21.894 08:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61148 00:09:21.894 [2024-11-20 08:42:52.629447] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.894 [2024-11-20 08:42:52.629514] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:21.894 [2024-11-20 08:42:52.629535] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:22.152 [2024-11-20 08:42:52.814216] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:23.087 08:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:23.087 00:09:23.087 real 0m4.921s 00:09:23.087 user 0m7.281s 00:09:23.087 sys 0m0.724s 00:09:23.087 08:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.087 ************************************ 00:09:23.087 END TEST raid_superblock_test 00:09:23.087 ************************************ 00:09:23.087 08:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.087 08:42:53 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:09:23.087 08:42:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:23.087 08:42:53 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:09:23.087 08:42:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:23.087 ************************************ 00:09:23.087 START TEST raid_read_error_test 00:09:23.087 ************************************ 00:09:23.087 08:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:09:23.087 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:23.087 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:23.087 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:23.087 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:23.087 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.087 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YWzzCRbnKh 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61360 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61360 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61360 ']' 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.088 08:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.347 [2024-11-20 08:42:54.001761] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:09:23.347 [2024-11-20 08:42:54.001975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61360 ] 00:09:23.347 [2024-11-20 08:42:54.182075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.607 [2024-11-20 08:42:54.318844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.866 [2024-11-20 08:42:54.528946] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.866 [2024-11-20 08:42:54.529029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.124 08:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.124 08:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:24.124 08:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:24.124 08:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:24.124 08:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.124 08:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.124 BaseBdev1_malloc 00:09:24.124 08:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.124 08:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:09:24.124 08:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.124 08:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.124 true 00:09:24.124 08:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.124 08:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:24.124 08:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.124 08:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.124 [2024-11-20 08:42:54.982923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:24.124 [2024-11-20 08:42:54.983008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.124 [2024-11-20 08:42:54.983037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:24.124 [2024-11-20 08:42:54.983054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.124 [2024-11-20 08:42:54.986015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.124 [2024-11-20 08:42:54.986244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:24.124 BaseBdev1 00:09:24.125 08:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.125 08:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:24.125 08:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:24.125 08:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.125 08:42:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:24.125 BaseBdev2_malloc 00:09:24.125 08:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.125 08:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:24.125 08:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.125 08:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.384 true 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.384 [2024-11-20 08:42:55.044927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:24.384 [2024-11-20 08:42:55.045199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.384 [2024-11-20 08:42:55.045236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:24.384 [2024-11-20 08:42:55.045255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.384 [2024-11-20 08:42:55.048172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.384 [2024-11-20 08:42:55.048258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:24.384 BaseBdev2 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:24.384 08:42:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.384 [2024-11-20 08:42:55.053126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:24.384 [2024-11-20 08:42:55.055657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.384 [2024-11-20 08:42:55.056119] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:24.384 [2024-11-20 08:42:55.056169] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:24.384 [2024-11-20 08:42:55.056510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:24.384 [2024-11-20 08:42:55.056763] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:24.384 [2024-11-20 08:42:55.056783] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:24.384 [2024-11-20 08:42:55.057061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.384 "name": "raid_bdev1", 00:09:24.384 "uuid": "4b1a2502-06b0-409e-8a14-704f78381efc", 00:09:24.384 "strip_size_kb": 64, 00:09:24.384 "state": "online", 00:09:24.384 "raid_level": "raid0", 00:09:24.384 "superblock": true, 00:09:24.384 "num_base_bdevs": 2, 00:09:24.384 "num_base_bdevs_discovered": 2, 00:09:24.384 "num_base_bdevs_operational": 2, 00:09:24.384 "base_bdevs_list": [ 00:09:24.384 { 00:09:24.384 "name": "BaseBdev1", 00:09:24.384 "uuid": "9a4f92d4-74e7-5886-ae90-8df4cce7b43c", 00:09:24.384 "is_configured": true, 00:09:24.384 "data_offset": 2048, 00:09:24.384 "data_size": 63488 00:09:24.384 }, 00:09:24.384 { 00:09:24.384 "name": "BaseBdev2", 00:09:24.384 "uuid": "65ba8cc0-2187-56c7-a9cf-1c28f2741fb4", 00:09:24.384 "is_configured": true, 00:09:24.384 "data_offset": 2048, 00:09:24.384 "data_size": 63488 00:09:24.384 } 00:09:24.384 ] 00:09:24.384 }' 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.384 08:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.693 08:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:24.693 08:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:24.967 [2024-11-20 08:42:55.714771] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:25.903 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:25.903 08:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.903 08:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.903 08:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.903 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:25.903 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:25.903 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:25.903 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:25.903 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:25.903 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.903 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:25.903 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.903 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:09:25.903 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.903 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.903 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.903 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.903 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.903 08:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.903 08:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.903 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.903 08:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.903 08:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.903 "name": "raid_bdev1", 00:09:25.903 "uuid": "4b1a2502-06b0-409e-8a14-704f78381efc", 00:09:25.903 "strip_size_kb": 64, 00:09:25.903 "state": "online", 00:09:25.903 "raid_level": "raid0", 00:09:25.903 "superblock": true, 00:09:25.903 "num_base_bdevs": 2, 00:09:25.903 "num_base_bdevs_discovered": 2, 00:09:25.903 "num_base_bdevs_operational": 2, 00:09:25.903 "base_bdevs_list": [ 00:09:25.903 { 00:09:25.903 "name": "BaseBdev1", 00:09:25.903 "uuid": "9a4f92d4-74e7-5886-ae90-8df4cce7b43c", 00:09:25.903 "is_configured": true, 00:09:25.903 "data_offset": 2048, 00:09:25.903 "data_size": 63488 00:09:25.903 }, 00:09:25.903 { 00:09:25.904 "name": "BaseBdev2", 00:09:25.904 "uuid": "65ba8cc0-2187-56c7-a9cf-1c28f2741fb4", 00:09:25.904 "is_configured": true, 00:09:25.904 "data_offset": 2048, 00:09:25.904 "data_size": 63488 00:09:25.904 } 00:09:25.904 ] 00:09:25.904 }' 00:09:25.904 08:42:56 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.904 08:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.470 08:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:26.470 08:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.471 08:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.471 [2024-11-20 08:42:57.109728] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:26.471 [2024-11-20 08:42:57.109770] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:26.471 [2024-11-20 08:42:57.113296] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.471 [2024-11-20 08:42:57.113355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.471 [2024-11-20 08:42:57.113410] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.471 [2024-11-20 08:42:57.113429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:26.471 { 00:09:26.471 "results": [ 00:09:26.471 { 00:09:26.471 "job": "raid_bdev1", 00:09:26.471 "core_mask": "0x1", 00:09:26.471 "workload": "randrw", 00:09:26.471 "percentage": 50, 00:09:26.471 "status": "finished", 00:09:26.471 "queue_depth": 1, 00:09:26.471 "io_size": 131072, 00:09:26.471 "runtime": 1.392508, 00:09:26.471 "iops": 10604.606939421534, 00:09:26.471 "mibps": 1325.5758674276917, 00:09:26.471 "io_failed": 1, 00:09:26.471 "io_timeout": 0, 00:09:26.471 "avg_latency_us": 132.0159401162218, 00:09:26.471 "min_latency_us": 41.658181818181816, 00:09:26.471 "max_latency_us": 1876.7127272727273 00:09:26.471 } 00:09:26.471 ], 00:09:26.471 "core_count": 1 00:09:26.471 } 00:09:26.471 08:42:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.471 08:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61360 00:09:26.471 08:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61360 ']' 00:09:26.471 08:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61360 00:09:26.471 08:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:26.471 08:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.471 08:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61360 00:09:26.471 killing process with pid 61360 00:09:26.471 08:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:26.471 08:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:26.471 08:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61360' 00:09:26.471 08:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61360 00:09:26.471 [2024-11-20 08:42:57.148897] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:26.471 08:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61360 00:09:26.471 [2024-11-20 08:42:57.270222] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:27.479 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YWzzCRbnKh 00:09:27.479 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:27.479 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:27.479 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:27.479 08:42:58 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:27.479 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:27.479 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:27.479 08:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:27.479 00:09:27.479 real 0m4.475s 00:09:27.479 user 0m5.576s 00:09:27.479 sys 0m0.550s 00:09:27.479 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.479 08:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.479 ************************************ 00:09:27.479 END TEST raid_read_error_test 00:09:27.479 ************************************ 00:09:27.737 08:42:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:09:27.737 08:42:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:27.737 08:42:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.737 08:42:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:27.737 ************************************ 00:09:27.737 START TEST raid_write_error_test 00:09:27.737 ************************************ 00:09:27.737 08:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:09:27.737 08:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:27.737 08:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:27.737 08:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:27.737 08:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:27.737 08:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.737 08:42:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:27.737 08:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:27.737 08:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.737 08:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:27.737 08:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:27.737 08:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:27.737 08:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:27.737 08:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:27.737 08:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:27.737 08:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:27.737 08:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:27.738 08:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:27.738 08:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:27.738 08:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:27.738 08:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:27.738 08:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:27.738 08:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:27.738 08:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MU1SGEYeB5 00:09:27.738 08:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61505 00:09:27.738 08:42:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61505 00:09:27.738 08:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61505 ']' 00:09:27.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.738 08:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.738 08:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.738 08:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.738 08:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:27.738 08:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.738 08:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.738 [2024-11-20 08:42:58.535941] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:09:27.738 [2024-11-20 08:42:58.536104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61505 ] 00:09:27.996 [2024-11-20 08:42:58.712003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.996 [2024-11-20 08:42:58.840940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.255 [2024-11-20 08:42:59.046127] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.255 [2024-11-20 08:42:59.046168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.823 BaseBdev1_malloc 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.823 true 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.823 [2024-11-20 08:42:59.601833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:28.823 [2024-11-20 08:42:59.601936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.823 [2024-11-20 08:42:59.601986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:28.823 [2024-11-20 08:42:59.602015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.823 [2024-11-20 08:42:59.606314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.823 [2024-11-20 08:42:59.606380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:28.823 BaseBdev1 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.823 BaseBdev2_malloc 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:28.823 08:42:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.823 true 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:28.823 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.824 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.824 [2024-11-20 08:42:59.673051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:28.824 [2024-11-20 08:42:59.673152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.824 [2024-11-20 08:42:59.673187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:28.824 [2024-11-20 08:42:59.673208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.824 [2024-11-20 08:42:59.676119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.824 [2024-11-20 08:42:59.676347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:28.824 BaseBdev2 00:09:28.824 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.824 08:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:28.824 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.824 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.824 [2024-11-20 08:42:59.685313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:28.824 [2024-11-20 08:42:59.687902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.824 [2024-11-20 08:42:59.688384] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:28.824 [2024-11-20 08:42:59.688420] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:28.824 [2024-11-20 08:42:59.688775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:28.824 [2024-11-20 08:42:59.689026] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:28.824 [2024-11-20 08:42:59.689048] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:28.824 [2024-11-20 08:42:59.689355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.824 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.824 08:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:28.824 08:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:28.824 08:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.824 08:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:28.824 08:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.824 08:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:28.824 08:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.824 08:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.824 08:42:59 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.824 08:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.824 08:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.824 08:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.824 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.824 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.824 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.083 08:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.083 "name": "raid_bdev1", 00:09:29.083 "uuid": "c4749464-7003-4163-b8ee-a9e5700345a1", 00:09:29.083 "strip_size_kb": 64, 00:09:29.083 "state": "online", 00:09:29.083 "raid_level": "raid0", 00:09:29.083 "superblock": true, 00:09:29.083 "num_base_bdevs": 2, 00:09:29.083 "num_base_bdevs_discovered": 2, 00:09:29.083 "num_base_bdevs_operational": 2, 00:09:29.083 "base_bdevs_list": [ 00:09:29.083 { 00:09:29.083 "name": "BaseBdev1", 00:09:29.083 "uuid": "c3183d26-a093-5b3e-be39-f32dcbe2bc2f", 00:09:29.083 "is_configured": true, 00:09:29.083 "data_offset": 2048, 00:09:29.083 "data_size": 63488 00:09:29.083 }, 00:09:29.083 { 00:09:29.083 "name": "BaseBdev2", 00:09:29.083 "uuid": "6ec34464-f128-5844-af2e-a14de40e576a", 00:09:29.083 "is_configured": true, 00:09:29.083 "data_offset": 2048, 00:09:29.083 "data_size": 63488 00:09:29.083 } 00:09:29.083 ] 00:09:29.083 }' 00:09:29.083 08:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.083 08:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.341 08:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:29.341 08:43:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:29.600 [2024-11-20 08:43:00.326921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:30.539 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:30.539 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.539 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.539 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.539 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:30.539 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:30.539 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:30.539 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:30.539 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.539 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.539 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:30.539 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.539 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:30.539 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.539 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.539 08:43:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.539 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.539 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.540 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.540 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.540 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.540 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.540 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.540 "name": "raid_bdev1", 00:09:30.540 "uuid": "c4749464-7003-4163-b8ee-a9e5700345a1", 00:09:30.540 "strip_size_kb": 64, 00:09:30.540 "state": "online", 00:09:30.540 "raid_level": "raid0", 00:09:30.540 "superblock": true, 00:09:30.540 "num_base_bdevs": 2, 00:09:30.540 "num_base_bdevs_discovered": 2, 00:09:30.540 "num_base_bdevs_operational": 2, 00:09:30.540 "base_bdevs_list": [ 00:09:30.540 { 00:09:30.540 "name": "BaseBdev1", 00:09:30.540 "uuid": "c3183d26-a093-5b3e-be39-f32dcbe2bc2f", 00:09:30.540 "is_configured": true, 00:09:30.540 "data_offset": 2048, 00:09:30.540 "data_size": 63488 00:09:30.540 }, 00:09:30.540 { 00:09:30.540 "name": "BaseBdev2", 00:09:30.540 "uuid": "6ec34464-f128-5844-af2e-a14de40e576a", 00:09:30.540 "is_configured": true, 00:09:30.540 "data_offset": 2048, 00:09:30.540 "data_size": 63488 00:09:30.540 } 00:09:30.540 ] 00:09:30.540 }' 00:09:30.540 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.540 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.108 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:09:31.108 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.108 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.108 [2024-11-20 08:43:01.750484] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:31.108 [2024-11-20 08:43:01.750673] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:31.108 [2024-11-20 08:43:01.754223] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.108 [2024-11-20 08:43:01.754412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.108 [2024-11-20 08:43:01.754503] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.108 [2024-11-20 08:43:01.754691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:31.108 { 00:09:31.108 "results": [ 00:09:31.108 { 00:09:31.108 "job": "raid_bdev1", 00:09:31.108 "core_mask": "0x1", 00:09:31.108 "workload": "randrw", 00:09:31.108 "percentage": 50, 00:09:31.108 "status": "finished", 00:09:31.108 "queue_depth": 1, 00:09:31.108 "io_size": 131072, 00:09:31.108 "runtime": 1.421127, 00:09:31.108 "iops": 10833.655260930234, 00:09:31.108 "mibps": 1354.2069076162793, 00:09:31.108 "io_failed": 1, 00:09:31.108 "io_timeout": 0, 00:09:31.108 "avg_latency_us": 128.47925463638137, 00:09:31.108 "min_latency_us": 42.123636363636365, 00:09:31.108 "max_latency_us": 1876.7127272727273 00:09:31.108 } 00:09:31.108 ], 00:09:31.108 "core_count": 1 00:09:31.108 } 00:09:31.108 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.108 08:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61505 00:09:31.108 08:43:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61505 ']' 00:09:31.108 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61505 00:09:31.108 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:31.108 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.108 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61505 00:09:31.108 killing process with pid 61505 00:09:31.108 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.108 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.108 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61505' 00:09:31.108 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61505 00:09:31.108 [2024-11-20 08:43:01.790695] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:31.108 08:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61505 00:09:31.108 [2024-11-20 08:43:01.913415] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:32.487 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MU1SGEYeB5 00:09:32.487 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:32.487 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:32.487 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:09:32.487 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:32.487 ************************************ 00:09:32.487 END TEST raid_write_error_test 00:09:32.488 ************************************ 00:09:32.488 
08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:32.488 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:32.488 08:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:09:32.488 00:09:32.488 real 0m4.582s 00:09:32.488 user 0m5.714s 00:09:32.488 sys 0m0.559s 00:09:32.488 08:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.488 08:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.488 08:43:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:32.488 08:43:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:09:32.488 08:43:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:32.488 08:43:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.488 08:43:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:32.488 ************************************ 00:09:32.488 START TEST raid_state_function_test 00:09:32.488 ************************************ 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:32.488 Process raid pid: 61643 00:09:32.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61643 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61643' 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61643 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61643 ']' 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.488 08:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.488 [2024-11-20 08:43:03.175435] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:09:32.488 [2024-11-20 08:43:03.175837] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.488 [2024-11-20 08:43:03.365865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.746 [2024-11-20 08:43:03.518106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.005 [2024-11-20 08:43:03.727837] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.005 [2024-11-20 08:43:03.728097] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.264 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.264 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:33.264 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:33.264 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.264 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.264 [2024-11-20 08:43:04.160450] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.264 [2024-11-20 08:43:04.160695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.264 [2024-11-20 08:43:04.160835] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.264 [2024-11-20 08:43:04.160872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.264 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.264 08:43:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:33.264 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.264 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.264 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.264 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.264 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:33.264 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.264 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.264 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.264 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.264 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.264 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.264 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.265 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.524 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.524 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.524 "name": "Existed_Raid", 00:09:33.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.524 "strip_size_kb": 64, 00:09:33.524 "state": "configuring", 00:09:33.524 
"raid_level": "concat", 00:09:33.524 "superblock": false, 00:09:33.524 "num_base_bdevs": 2, 00:09:33.524 "num_base_bdevs_discovered": 0, 00:09:33.524 "num_base_bdevs_operational": 2, 00:09:33.524 "base_bdevs_list": [ 00:09:33.524 { 00:09:33.524 "name": "BaseBdev1", 00:09:33.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.524 "is_configured": false, 00:09:33.524 "data_offset": 0, 00:09:33.524 "data_size": 0 00:09:33.524 }, 00:09:33.524 { 00:09:33.524 "name": "BaseBdev2", 00:09:33.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.524 "is_configured": false, 00:09:33.524 "data_offset": 0, 00:09:33.524 "data_size": 0 00:09:33.524 } 00:09:33.524 ] 00:09:33.524 }' 00:09:33.524 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.524 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.784 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:33.784 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.784 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.784 [2024-11-20 08:43:04.632526] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.784 [2024-11-20 08:43:04.632570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:33.784 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.784 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:33.784 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.784 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:33.784 [2024-11-20 08:43:04.640485] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.784 [2024-11-20 08:43:04.640545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.784 [2024-11-20 08:43:04.640560] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.784 [2024-11-20 08:43:04.640580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.784 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.784 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:33.784 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.784 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.784 [2024-11-20 08:43:04.685868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.784 BaseBdev1 00:09:33.784 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.784 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:33.784 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:33.784 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.784 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:33.784 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.784 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.784 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:09:33.784 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.784 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.043 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.043 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:34.043 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.043 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.043 [ 00:09:34.043 { 00:09:34.043 "name": "BaseBdev1", 00:09:34.043 "aliases": [ 00:09:34.043 "3aa9d8ac-b2be-4f04-95f2-2cbbb69ad9ac" 00:09:34.043 ], 00:09:34.043 "product_name": "Malloc disk", 00:09:34.043 "block_size": 512, 00:09:34.043 "num_blocks": 65536, 00:09:34.043 "uuid": "3aa9d8ac-b2be-4f04-95f2-2cbbb69ad9ac", 00:09:34.043 "assigned_rate_limits": { 00:09:34.043 "rw_ios_per_sec": 0, 00:09:34.043 "rw_mbytes_per_sec": 0, 00:09:34.043 "r_mbytes_per_sec": 0, 00:09:34.043 "w_mbytes_per_sec": 0 00:09:34.043 }, 00:09:34.043 "claimed": true, 00:09:34.043 "claim_type": "exclusive_write", 00:09:34.043 "zoned": false, 00:09:34.043 "supported_io_types": { 00:09:34.043 "read": true, 00:09:34.043 "write": true, 00:09:34.043 "unmap": true, 00:09:34.043 "flush": true, 00:09:34.043 "reset": true, 00:09:34.043 "nvme_admin": false, 00:09:34.043 "nvme_io": false, 00:09:34.043 "nvme_io_md": false, 00:09:34.043 "write_zeroes": true, 00:09:34.043 "zcopy": true, 00:09:34.043 "get_zone_info": false, 00:09:34.043 "zone_management": false, 00:09:34.043 "zone_append": false, 00:09:34.043 "compare": false, 00:09:34.043 "compare_and_write": false, 00:09:34.043 "abort": true, 00:09:34.043 "seek_hole": false, 00:09:34.043 "seek_data": false, 00:09:34.043 "copy": true, 00:09:34.043 "nvme_iov_md": 
false 00:09:34.043 }, 00:09:34.043 "memory_domains": [ 00:09:34.043 { 00:09:34.043 "dma_device_id": "system", 00:09:34.043 "dma_device_type": 1 00:09:34.043 }, 00:09:34.043 { 00:09:34.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.043 "dma_device_type": 2 00:09:34.043 } 00:09:34.043 ], 00:09:34.043 "driver_specific": {} 00:09:34.043 } 00:09:34.043 ] 00:09:34.043 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.043 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:34.043 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:34.043 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.043 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.043 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.043 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.043 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.043 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.043 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.043 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.043 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.043 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.043 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.043 08:43:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.043 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.043 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.043 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.043 "name": "Existed_Raid", 00:09:34.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.043 "strip_size_kb": 64, 00:09:34.043 "state": "configuring", 00:09:34.043 "raid_level": "concat", 00:09:34.043 "superblock": false, 00:09:34.043 "num_base_bdevs": 2, 00:09:34.043 "num_base_bdevs_discovered": 1, 00:09:34.043 "num_base_bdevs_operational": 2, 00:09:34.043 "base_bdevs_list": [ 00:09:34.043 { 00:09:34.043 "name": "BaseBdev1", 00:09:34.043 "uuid": "3aa9d8ac-b2be-4f04-95f2-2cbbb69ad9ac", 00:09:34.043 "is_configured": true, 00:09:34.043 "data_offset": 0, 00:09:34.043 "data_size": 65536 00:09:34.043 }, 00:09:34.043 { 00:09:34.043 "name": "BaseBdev2", 00:09:34.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.043 "is_configured": false, 00:09:34.043 "data_offset": 0, 00:09:34.043 "data_size": 0 00:09:34.043 } 00:09:34.043 ] 00:09:34.043 }' 00:09:34.043 08:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.043 08:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.302 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:34.302 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.302 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.302 [2024-11-20 08:43:05.206076] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:34.302 [2024-11-20 08:43:05.206319] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:34.302 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.302 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:34.302 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.302 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.302 [2024-11-20 08:43:05.214124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.610 [2024-11-20 08:43:05.216716] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.610 [2024-11-20 08:43:05.216776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.610 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.610 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:34.610 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.610 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:34.611 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.611 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.611 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.611 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.611 08:43:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.611 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.611 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.611 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.611 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.611 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.611 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.611 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.611 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.611 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.611 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.611 "name": "Existed_Raid", 00:09:34.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.611 "strip_size_kb": 64, 00:09:34.611 "state": "configuring", 00:09:34.611 "raid_level": "concat", 00:09:34.611 "superblock": false, 00:09:34.611 "num_base_bdevs": 2, 00:09:34.611 "num_base_bdevs_discovered": 1, 00:09:34.611 "num_base_bdevs_operational": 2, 00:09:34.611 "base_bdevs_list": [ 00:09:34.611 { 00:09:34.611 "name": "BaseBdev1", 00:09:34.611 "uuid": "3aa9d8ac-b2be-4f04-95f2-2cbbb69ad9ac", 00:09:34.611 "is_configured": true, 00:09:34.611 "data_offset": 0, 00:09:34.611 "data_size": 65536 00:09:34.611 }, 00:09:34.611 { 00:09:34.611 "name": "BaseBdev2", 00:09:34.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.611 "is_configured": false, 00:09:34.611 "data_offset": 0, 00:09:34.611 "data_size": 0 
00:09:34.611 } 00:09:34.611 ] 00:09:34.611 }' 00:09:34.611 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.611 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.910 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:34.910 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.910 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.910 [2024-11-20 08:43:05.776996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.910 [2024-11-20 08:43:05.777064] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:34.910 [2024-11-20 08:43:05.777078] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:34.910 [2024-11-20 08:43:05.777463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:34.910 [2024-11-20 08:43:05.777676] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:34.910 [2024-11-20 08:43:05.777701] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:34.910 [2024-11-20 08:43:05.778018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.910 BaseBdev2 00:09:34.910 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.910 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:34.910 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:34.910 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.910 08:43:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:34.910 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.911 [ 00:09:34.911 { 00:09:34.911 "name": "BaseBdev2", 00:09:34.911 "aliases": [ 00:09:34.911 "e82b4dea-b0de-4303-b4ac-7ecf86bdbb96" 00:09:34.911 ], 00:09:34.911 "product_name": "Malloc disk", 00:09:34.911 "block_size": 512, 00:09:34.911 "num_blocks": 65536, 00:09:34.911 "uuid": "e82b4dea-b0de-4303-b4ac-7ecf86bdbb96", 00:09:34.911 "assigned_rate_limits": { 00:09:34.911 "rw_ios_per_sec": 0, 00:09:34.911 "rw_mbytes_per_sec": 0, 00:09:34.911 "r_mbytes_per_sec": 0, 00:09:34.911 "w_mbytes_per_sec": 0 00:09:34.911 }, 00:09:34.911 "claimed": true, 00:09:34.911 "claim_type": "exclusive_write", 00:09:34.911 "zoned": false, 00:09:34.911 "supported_io_types": { 00:09:34.911 "read": true, 00:09:34.911 "write": true, 00:09:34.911 "unmap": true, 00:09:34.911 "flush": true, 00:09:34.911 "reset": true, 00:09:34.911 "nvme_admin": false, 00:09:34.911 "nvme_io": false, 00:09:34.911 "nvme_io_md": 
false, 00:09:34.911 "write_zeroes": true, 00:09:34.911 "zcopy": true, 00:09:34.911 "get_zone_info": false, 00:09:34.911 "zone_management": false, 00:09:34.911 "zone_append": false, 00:09:34.911 "compare": false, 00:09:34.911 "compare_and_write": false, 00:09:34.911 "abort": true, 00:09:34.911 "seek_hole": false, 00:09:34.911 "seek_data": false, 00:09:34.911 "copy": true, 00:09:34.911 "nvme_iov_md": false 00:09:34.911 }, 00:09:34.911 "memory_domains": [ 00:09:34.911 { 00:09:34.911 "dma_device_id": "system", 00:09:34.911 "dma_device_type": 1 00:09:34.911 }, 00:09:34.911 { 00:09:34.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.911 "dma_device_type": 2 00:09:34.911 } 00:09:34.911 ], 00:09:34.911 "driver_specific": {} 00:09:34.911 } 00:09:34.911 ] 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.911 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.170 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.170 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.170 "name": "Existed_Raid", 00:09:35.170 "uuid": "5bea3dee-df89-41e3-af44-f573677c2fc0", 00:09:35.170 "strip_size_kb": 64, 00:09:35.170 "state": "online", 00:09:35.170 "raid_level": "concat", 00:09:35.170 "superblock": false, 00:09:35.170 "num_base_bdevs": 2, 00:09:35.170 "num_base_bdevs_discovered": 2, 00:09:35.170 "num_base_bdevs_operational": 2, 00:09:35.170 "base_bdevs_list": [ 00:09:35.170 { 00:09:35.170 "name": "BaseBdev1", 00:09:35.170 "uuid": "3aa9d8ac-b2be-4f04-95f2-2cbbb69ad9ac", 00:09:35.170 "is_configured": true, 00:09:35.170 "data_offset": 0, 00:09:35.170 "data_size": 65536 00:09:35.170 }, 00:09:35.170 { 00:09:35.170 "name": "BaseBdev2", 00:09:35.170 "uuid": "e82b4dea-b0de-4303-b4ac-7ecf86bdbb96", 00:09:35.170 "is_configured": true, 00:09:35.170 "data_offset": 0, 00:09:35.170 "data_size": 65536 00:09:35.170 } 00:09:35.170 ] 00:09:35.170 }' 00:09:35.170 08:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:35.170 08:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.428 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:35.428 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:35.428 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:35.428 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:35.428 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:35.428 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:35.428 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:35.428 08:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.428 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:35.428 08:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.687 [2024-11-20 08:43:06.345546] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.687 08:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.687 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:35.687 "name": "Existed_Raid", 00:09:35.687 "aliases": [ 00:09:35.687 "5bea3dee-df89-41e3-af44-f573677c2fc0" 00:09:35.687 ], 00:09:35.687 "product_name": "Raid Volume", 00:09:35.687 "block_size": 512, 00:09:35.687 "num_blocks": 131072, 00:09:35.687 "uuid": "5bea3dee-df89-41e3-af44-f573677c2fc0", 00:09:35.687 "assigned_rate_limits": { 00:09:35.687 "rw_ios_per_sec": 0, 00:09:35.687 "rw_mbytes_per_sec": 0, 00:09:35.687 "r_mbytes_per_sec": 
0, 00:09:35.687 "w_mbytes_per_sec": 0 00:09:35.687 }, 00:09:35.687 "claimed": false, 00:09:35.687 "zoned": false, 00:09:35.687 "supported_io_types": { 00:09:35.687 "read": true, 00:09:35.687 "write": true, 00:09:35.687 "unmap": true, 00:09:35.687 "flush": true, 00:09:35.687 "reset": true, 00:09:35.687 "nvme_admin": false, 00:09:35.687 "nvme_io": false, 00:09:35.687 "nvme_io_md": false, 00:09:35.687 "write_zeroes": true, 00:09:35.687 "zcopy": false, 00:09:35.687 "get_zone_info": false, 00:09:35.687 "zone_management": false, 00:09:35.687 "zone_append": false, 00:09:35.687 "compare": false, 00:09:35.687 "compare_and_write": false, 00:09:35.687 "abort": false, 00:09:35.687 "seek_hole": false, 00:09:35.687 "seek_data": false, 00:09:35.687 "copy": false, 00:09:35.687 "nvme_iov_md": false 00:09:35.687 }, 00:09:35.687 "memory_domains": [ 00:09:35.687 { 00:09:35.687 "dma_device_id": "system", 00:09:35.687 "dma_device_type": 1 00:09:35.687 }, 00:09:35.687 { 00:09:35.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.687 "dma_device_type": 2 00:09:35.687 }, 00:09:35.687 { 00:09:35.687 "dma_device_id": "system", 00:09:35.687 "dma_device_type": 1 00:09:35.687 }, 00:09:35.687 { 00:09:35.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.687 "dma_device_type": 2 00:09:35.687 } 00:09:35.687 ], 00:09:35.687 "driver_specific": { 00:09:35.687 "raid": { 00:09:35.687 "uuid": "5bea3dee-df89-41e3-af44-f573677c2fc0", 00:09:35.687 "strip_size_kb": 64, 00:09:35.687 "state": "online", 00:09:35.687 "raid_level": "concat", 00:09:35.687 "superblock": false, 00:09:35.687 "num_base_bdevs": 2, 00:09:35.687 "num_base_bdevs_discovered": 2, 00:09:35.687 "num_base_bdevs_operational": 2, 00:09:35.687 "base_bdevs_list": [ 00:09:35.687 { 00:09:35.687 "name": "BaseBdev1", 00:09:35.687 "uuid": "3aa9d8ac-b2be-4f04-95f2-2cbbb69ad9ac", 00:09:35.687 "is_configured": true, 00:09:35.687 "data_offset": 0, 00:09:35.687 "data_size": 65536 00:09:35.687 }, 00:09:35.687 { 00:09:35.687 "name": "BaseBdev2", 
00:09:35.687 "uuid": "e82b4dea-b0de-4303-b4ac-7ecf86bdbb96", 00:09:35.687 "is_configured": true, 00:09:35.687 "data_offset": 0, 00:09:35.687 "data_size": 65536 00:09:35.687 } 00:09:35.687 ] 00:09:35.687 } 00:09:35.687 } 00:09:35.687 }' 00:09:35.687 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.687 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:35.687 BaseBdev2' 00:09:35.687 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.687 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.687 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.687 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:35.687 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.687 08:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.687 08:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.687 08:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.687 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.687 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.687 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.687 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:35.687 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.688 08:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.688 08:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.688 08:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.947 [2024-11-20 08:43:06.617356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:35.947 [2024-11-20 08:43:06.617403] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.947 [2024-11-20 08:43:06.617486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.947 "name": "Existed_Raid", 00:09:35.947 "uuid": "5bea3dee-df89-41e3-af44-f573677c2fc0", 00:09:35.947 "strip_size_kb": 64, 00:09:35.947 
"state": "offline", 00:09:35.947 "raid_level": "concat", 00:09:35.947 "superblock": false, 00:09:35.947 "num_base_bdevs": 2, 00:09:35.947 "num_base_bdevs_discovered": 1, 00:09:35.947 "num_base_bdevs_operational": 1, 00:09:35.947 "base_bdevs_list": [ 00:09:35.947 { 00:09:35.947 "name": null, 00:09:35.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.947 "is_configured": false, 00:09:35.947 "data_offset": 0, 00:09:35.947 "data_size": 65536 00:09:35.947 }, 00:09:35.947 { 00:09:35.947 "name": "BaseBdev2", 00:09:35.947 "uuid": "e82b4dea-b0de-4303-b4ac-7ecf86bdbb96", 00:09:35.947 "is_configured": true, 00:09:35.947 "data_offset": 0, 00:09:35.947 "data_size": 65536 00:09:35.947 } 00:09:35.947 ] 00:09:35.947 }' 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.947 08:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.515 [2024-11-20 08:43:07.268495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:36.515 [2024-11-20 08:43:07.268596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61643 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61643 ']' 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61643 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.515 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61643 00:09:36.774 killing process with pid 61643 00:09:36.774 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.774 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.774 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61643' 00:09:36.774 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61643 00:09:36.774 [2024-11-20 08:43:07.441087] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:36.774 08:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61643 00:09:36.774 [2024-11-20 08:43:07.455975] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:37.712 ************************************ 00:09:37.712 END TEST raid_state_function_test 00:09:37.712 ************************************ 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:37.712 00:09:37.712 real 0m5.427s 00:09:37.712 user 0m8.206s 00:09:37.712 sys 0m0.749s 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.712 08:43:08 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:09:37.712 08:43:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:09:37.712 08:43:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.712 08:43:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:37.712 ************************************ 00:09:37.712 START TEST raid_state_function_test_sb 00:09:37.712 ************************************ 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:37.712 Process raid pid: 61902 00:09:37.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61902 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61902' 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61902 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61902 ']' 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.712 08:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.971 [2024-11-20 08:43:08.659827] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:09:37.971 [2024-11-20 08:43:08.660243] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.971 [2024-11-20 08:43:08.852236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.230 [2024-11-20 08:43:09.010973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.489 [2024-11-20 08:43:09.230048] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.489 [2024-11-20 08:43:09.230113] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.747 08:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.747 08:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:38.747 08:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:38.747 08:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.747 08:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.747 [2024-11-20 08:43:09.645259] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:38.747 [2024-11-20 08:43:09.645325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:38.747 [2024-11-20 08:43:09.645344] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:38.747 [2024-11-20 08:43:09.645362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:38.747 08:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
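The `waitforlisten` step above blocks until the freshly launched `bdev_svc` app (raid pid 61902) is reachable on `/var/tmp/spdk.sock`; a simplified polling sketch of that idea (the function name, retry count, and the bare socket-file check are illustrative — the real helper also probes the RPC server):

```shell
# Poll until a UNIX-domain socket file appears, or give up after max_retries.
waitforlisten_sketch() {
    local sock_path=$1 max_retries=${2:-100}
    for ((i = 0; i < max_retries; i++)); do
        if [ -S "$sock_path" ]; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```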
00:09:38.747 08:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:38.747 08:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.747 08:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.747 08:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.747 08:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.747 08:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:38.747 08:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.747 08:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.747 08:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.747 08:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.747 08:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.747 08:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.747 08:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.747 08:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.006 08:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.006 08:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.006 "name": "Existed_Raid", 00:09:39.006 "uuid": "9e58662e-5c6f-4069-b1b6-928c6fdc9fe5", 00:09:39.006 
"strip_size_kb": 64, 00:09:39.006 "state": "configuring", 00:09:39.006 "raid_level": "concat", 00:09:39.006 "superblock": true, 00:09:39.006 "num_base_bdevs": 2, 00:09:39.006 "num_base_bdevs_discovered": 0, 00:09:39.006 "num_base_bdevs_operational": 2, 00:09:39.006 "base_bdevs_list": [ 00:09:39.006 { 00:09:39.006 "name": "BaseBdev1", 00:09:39.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.006 "is_configured": false, 00:09:39.006 "data_offset": 0, 00:09:39.006 "data_size": 0 00:09:39.006 }, 00:09:39.006 { 00:09:39.006 "name": "BaseBdev2", 00:09:39.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.006 "is_configured": false, 00:09:39.006 "data_offset": 0, 00:09:39.006 "data_size": 0 00:09:39.006 } 00:09:39.006 ] 00:09:39.006 }' 00:09:39.006 08:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.006 08:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.573 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:39.573 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.574 [2024-11-20 08:43:10.189407] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:39.574 [2024-11-20 08:43:10.189451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.574 [2024-11-20 08:43:10.201381] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.574 [2024-11-20 08:43:10.201455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.574 [2024-11-20 08:43:10.201471] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.574 [2024-11-20 08:43:10.201491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.574 [2024-11-20 08:43:10.246712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:39.574 BaseBdev1 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.574 [ 00:09:39.574 { 00:09:39.574 "name": "BaseBdev1", 00:09:39.574 "aliases": [ 00:09:39.574 "361694df-4096-49f7-aa66-405d07844667" 00:09:39.574 ], 00:09:39.574 "product_name": "Malloc disk", 00:09:39.574 "block_size": 512, 00:09:39.574 "num_blocks": 65536, 00:09:39.574 "uuid": "361694df-4096-49f7-aa66-405d07844667", 00:09:39.574 "assigned_rate_limits": { 00:09:39.574 "rw_ios_per_sec": 0, 00:09:39.574 "rw_mbytes_per_sec": 0, 00:09:39.574 "r_mbytes_per_sec": 0, 00:09:39.574 "w_mbytes_per_sec": 0 00:09:39.574 }, 00:09:39.574 "claimed": true, 00:09:39.574 "claim_type": "exclusive_write", 00:09:39.574 "zoned": false, 00:09:39.574 "supported_io_types": { 00:09:39.574 "read": true, 00:09:39.574 "write": true, 00:09:39.574 "unmap": true, 00:09:39.574 "flush": true, 00:09:39.574 "reset": true, 00:09:39.574 "nvme_admin": false, 00:09:39.574 "nvme_io": false, 00:09:39.574 "nvme_io_md": false, 00:09:39.574 "write_zeroes": true, 00:09:39.574 "zcopy": true, 00:09:39.574 "get_zone_info": false, 00:09:39.574 "zone_management": false, 00:09:39.574 "zone_append": false, 00:09:39.574 "compare": false, 00:09:39.574 
"compare_and_write": false, 00:09:39.574 "abort": true, 00:09:39.574 "seek_hole": false, 00:09:39.574 "seek_data": false, 00:09:39.574 "copy": true, 00:09:39.574 "nvme_iov_md": false 00:09:39.574 }, 00:09:39.574 "memory_domains": [ 00:09:39.574 { 00:09:39.574 "dma_device_id": "system", 00:09:39.574 "dma_device_type": 1 00:09:39.574 }, 00:09:39.574 { 00:09:39.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.574 "dma_device_type": 2 00:09:39.574 } 00:09:39.574 ], 00:09:39.574 "driver_specific": {} 00:09:39.574 } 00:09:39.574 ] 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.574 08:43:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.574 "name": "Existed_Raid", 00:09:39.574 "uuid": "ef47ec90-d36a-47b4-ae42-fec2954e7d35", 00:09:39.574 "strip_size_kb": 64, 00:09:39.574 "state": "configuring", 00:09:39.574 "raid_level": "concat", 00:09:39.574 "superblock": true, 00:09:39.574 "num_base_bdevs": 2, 00:09:39.574 "num_base_bdevs_discovered": 1, 00:09:39.574 "num_base_bdevs_operational": 2, 00:09:39.574 "base_bdevs_list": [ 00:09:39.574 { 00:09:39.574 "name": "BaseBdev1", 00:09:39.574 "uuid": "361694df-4096-49f7-aa66-405d07844667", 00:09:39.574 "is_configured": true, 00:09:39.574 "data_offset": 2048, 00:09:39.574 "data_size": 63488 00:09:39.574 }, 00:09:39.574 { 00:09:39.574 "name": "BaseBdev2", 00:09:39.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.574 "is_configured": false, 00:09:39.574 "data_offset": 0, 00:09:39.574 "data_size": 0 00:09:39.574 } 00:09:39.574 ] 00:09:39.574 }' 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.574 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.175 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:40.175 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
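The sizes in these dumps are internally consistent: `bdev_malloc_create 32 512` creates a 32 MiB bdev with 512-byte blocks, the `-s` superblock option reserves the first 2048 blocks of each base bdev (the `data_offset` above), and a two-disk concat exposes the sum of the remaining data blocks (the `blockcnt 126976` logged when the raid goes online). A quick arithmetic check of the values seen in the trace:

```shell
block_size=512
num_blocks=$(( 32 * 1024 * 1024 / block_size ))   # bdev_malloc_create 32 512 -> 65536 blocks
data_offset=2048                                  # blocks reserved for the -s superblock
data_size=$(( num_blocks - data_offset ))         # 63488, matching "data_size" in the dumps
raid_blockcnt=$(( 2 * data_size ))                # 126976, the concat volume's blockcnt
echo "$num_blocks $data_size $raid_blockcnt"      # 65536 63488 126976
```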
00:09:40.175 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.175 [2024-11-20 08:43:10.830973] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.175 [2024-11-20 08:43:10.831035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:40.175 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.175 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:40.175 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.175 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.175 [2024-11-20 08:43:10.839016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.176 [2024-11-20 08:43:10.841480] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:40.176 [2024-11-20 08:43:10.841544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:40.176 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.176 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:40.176 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:40.176 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:40.176 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.176 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:40.176 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.176 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.176 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:40.176 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.176 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.176 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.176 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.176 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.176 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.176 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.176 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.176 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.176 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.176 "name": "Existed_Raid", 00:09:40.176 "uuid": "5c14bd8d-b0df-4f6a-847e-50ef0ce86fe4", 00:09:40.176 "strip_size_kb": 64, 00:09:40.176 "state": "configuring", 00:09:40.176 "raid_level": "concat", 00:09:40.176 "superblock": true, 00:09:40.176 "num_base_bdevs": 2, 00:09:40.176 "num_base_bdevs_discovered": 1, 00:09:40.176 "num_base_bdevs_operational": 2, 00:09:40.176 "base_bdevs_list": [ 00:09:40.176 { 00:09:40.176 "name": "BaseBdev1", 00:09:40.176 "uuid": 
"361694df-4096-49f7-aa66-405d07844667", 00:09:40.176 "is_configured": true, 00:09:40.176 "data_offset": 2048, 00:09:40.176 "data_size": 63488 00:09:40.176 }, 00:09:40.176 { 00:09:40.176 "name": "BaseBdev2", 00:09:40.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.176 "is_configured": false, 00:09:40.176 "data_offset": 0, 00:09:40.176 "data_size": 0 00:09:40.176 } 00:09:40.176 ] 00:09:40.176 }' 00:09:40.176 08:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.176 08:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.742 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:40.742 08:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.742 08:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.742 [2024-11-20 08:43:11.396643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.742 [2024-11-20 08:43:11.396965] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:40.742 [2024-11-20 08:43:11.396984] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:40.742 [2024-11-20 08:43:11.397385] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:40.742 BaseBdev2 00:09:40.742 [2024-11-20 08:43:11.397576] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:40.742 [2024-11-20 08:43:11.397607] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:40.742 [2024-11-20 08:43:11.397789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.742 08:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:40.742 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:40.742 08:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:40.742 08:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.742 08:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:40.742 08:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.742 08:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.742 08:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.742 08:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.742 08:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.742 08:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.742 08:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:40.742 08:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.742 08:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.742 [ 00:09:40.742 { 00:09:40.742 "name": "BaseBdev2", 00:09:40.742 "aliases": [ 00:09:40.742 "de7a1217-f985-4bc0-ba26-ca4a61cfd5a8" 00:09:40.742 ], 00:09:40.742 "product_name": "Malloc disk", 00:09:40.742 "block_size": 512, 00:09:40.742 "num_blocks": 65536, 00:09:40.742 "uuid": "de7a1217-f985-4bc0-ba26-ca4a61cfd5a8", 00:09:40.742 "assigned_rate_limits": { 00:09:40.742 "rw_ios_per_sec": 0, 00:09:40.742 "rw_mbytes_per_sec": 0, 00:09:40.742 "r_mbytes_per_sec": 0, 
00:09:40.742 "w_mbytes_per_sec": 0 00:09:40.742 }, 00:09:40.742 "claimed": true, 00:09:40.742 "claim_type": "exclusive_write", 00:09:40.742 "zoned": false, 00:09:40.742 "supported_io_types": { 00:09:40.742 "read": true, 00:09:40.742 "write": true, 00:09:40.742 "unmap": true, 00:09:40.742 "flush": true, 00:09:40.742 "reset": true, 00:09:40.743 "nvme_admin": false, 00:09:40.743 "nvme_io": false, 00:09:40.743 "nvme_io_md": false, 00:09:40.743 "write_zeroes": true, 00:09:40.743 "zcopy": true, 00:09:40.743 "get_zone_info": false, 00:09:40.743 "zone_management": false, 00:09:40.743 "zone_append": false, 00:09:40.743 "compare": false, 00:09:40.743 "compare_and_write": false, 00:09:40.743 "abort": true, 00:09:40.743 "seek_hole": false, 00:09:40.743 "seek_data": false, 00:09:40.743 "copy": true, 00:09:40.743 "nvme_iov_md": false 00:09:40.743 }, 00:09:40.743 "memory_domains": [ 00:09:40.743 { 00:09:40.743 "dma_device_id": "system", 00:09:40.743 "dma_device_type": 1 00:09:40.743 }, 00:09:40.743 { 00:09:40.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.743 "dma_device_type": 2 00:09:40.743 } 00:09:40.743 ], 00:09:40.743 "driver_specific": {} 00:09:40.743 } 00:09:40.743 ] 00:09:40.743 08:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.743 08:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:40.743 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:40.743 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:40.743 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:40.743 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.743 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:09:40.743 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.743 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.743 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:40.743 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.743 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.743 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.743 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.743 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.743 08:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.743 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.743 08:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.743 08:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.743 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.743 "name": "Existed_Raid", 00:09:40.743 "uuid": "5c14bd8d-b0df-4f6a-847e-50ef0ce86fe4", 00:09:40.743 "strip_size_kb": 64, 00:09:40.743 "state": "online", 00:09:40.743 "raid_level": "concat", 00:09:40.743 "superblock": true, 00:09:40.743 "num_base_bdevs": 2, 00:09:40.743 "num_base_bdevs_discovered": 2, 00:09:40.743 "num_base_bdevs_operational": 2, 00:09:40.743 "base_bdevs_list": [ 00:09:40.743 { 00:09:40.743 "name": "BaseBdev1", 00:09:40.743 "uuid": 
"361694df-4096-49f7-aa66-405d07844667", 00:09:40.743 "is_configured": true, 00:09:40.743 "data_offset": 2048, 00:09:40.743 "data_size": 63488 00:09:40.743 }, 00:09:40.743 { 00:09:40.743 "name": "BaseBdev2", 00:09:40.743 "uuid": "de7a1217-f985-4bc0-ba26-ca4a61cfd5a8", 00:09:40.743 "is_configured": true, 00:09:40.743 "data_offset": 2048, 00:09:40.743 "data_size": 63488 00:09:40.743 } 00:09:40.743 ] 00:09:40.743 }' 00:09:40.743 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.743 08:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.310 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:41.310 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:41.310 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:41.310 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:41.310 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:41.310 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:41.310 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:41.310 08:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.310 08:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:41.310 08:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.310 [2024-11-20 08:43:11.965185] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.310 08:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:41.310 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:41.310 "name": "Existed_Raid", 00:09:41.310 "aliases": [ 00:09:41.310 "5c14bd8d-b0df-4f6a-847e-50ef0ce86fe4" 00:09:41.310 ], 00:09:41.310 "product_name": "Raid Volume", 00:09:41.310 "block_size": 512, 00:09:41.310 "num_blocks": 126976, 00:09:41.310 "uuid": "5c14bd8d-b0df-4f6a-847e-50ef0ce86fe4", 00:09:41.310 "assigned_rate_limits": { 00:09:41.310 "rw_ios_per_sec": 0, 00:09:41.310 "rw_mbytes_per_sec": 0, 00:09:41.310 "r_mbytes_per_sec": 0, 00:09:41.310 "w_mbytes_per_sec": 0 00:09:41.310 }, 00:09:41.310 "claimed": false, 00:09:41.310 "zoned": false, 00:09:41.310 "supported_io_types": { 00:09:41.310 "read": true, 00:09:41.310 "write": true, 00:09:41.310 "unmap": true, 00:09:41.310 "flush": true, 00:09:41.310 "reset": true, 00:09:41.310 "nvme_admin": false, 00:09:41.310 "nvme_io": false, 00:09:41.310 "nvme_io_md": false, 00:09:41.310 "write_zeroes": true, 00:09:41.310 "zcopy": false, 00:09:41.310 "get_zone_info": false, 00:09:41.310 "zone_management": false, 00:09:41.310 "zone_append": false, 00:09:41.310 "compare": false, 00:09:41.310 "compare_and_write": false, 00:09:41.310 "abort": false, 00:09:41.310 "seek_hole": false, 00:09:41.310 "seek_data": false, 00:09:41.310 "copy": false, 00:09:41.310 "nvme_iov_md": false 00:09:41.310 }, 00:09:41.310 "memory_domains": [ 00:09:41.310 { 00:09:41.310 "dma_device_id": "system", 00:09:41.310 "dma_device_type": 1 00:09:41.310 }, 00:09:41.310 { 00:09:41.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.310 "dma_device_type": 2 00:09:41.310 }, 00:09:41.310 { 00:09:41.310 "dma_device_id": "system", 00:09:41.310 "dma_device_type": 1 00:09:41.310 }, 00:09:41.310 { 00:09:41.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.310 "dma_device_type": 2 00:09:41.310 } 00:09:41.310 ], 00:09:41.310 "driver_specific": { 00:09:41.310 "raid": { 00:09:41.310 "uuid": "5c14bd8d-b0df-4f6a-847e-50ef0ce86fe4", 00:09:41.310 
"strip_size_kb": 64, 00:09:41.310 "state": "online", 00:09:41.310 "raid_level": "concat", 00:09:41.310 "superblock": true, 00:09:41.310 "num_base_bdevs": 2, 00:09:41.310 "num_base_bdevs_discovered": 2, 00:09:41.310 "num_base_bdevs_operational": 2, 00:09:41.310 "base_bdevs_list": [ 00:09:41.310 { 00:09:41.310 "name": "BaseBdev1", 00:09:41.310 "uuid": "361694df-4096-49f7-aa66-405d07844667", 00:09:41.310 "is_configured": true, 00:09:41.310 "data_offset": 2048, 00:09:41.310 "data_size": 63488 00:09:41.310 }, 00:09:41.310 { 00:09:41.310 "name": "BaseBdev2", 00:09:41.310 "uuid": "de7a1217-f985-4bc0-ba26-ca4a61cfd5a8", 00:09:41.310 "is_configured": true, 00:09:41.311 "data_offset": 2048, 00:09:41.311 "data_size": 63488 00:09:41.311 } 00:09:41.311 ] 00:09:41.311 } 00:09:41.311 } 00:09:41.311 }' 00:09:41.311 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:41.311 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:41.311 BaseBdev2' 00:09:41.311 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.311 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:41.311 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.311 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.311 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:41.311 08:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.311 08:43:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:41.311 08:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.311 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.311 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.311 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.311 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:41.311 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.311 08:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.311 08:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.311 08:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.570 [2024-11-20 08:43:12.228952] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:41.570 [2024-11-20 08:43:12.228993] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.570 [2024-11-20 08:43:12.229056] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.570 "name": "Existed_Raid", 00:09:41.570 "uuid": "5c14bd8d-b0df-4f6a-847e-50ef0ce86fe4", 00:09:41.570 "strip_size_kb": 64, 00:09:41.570 "state": "offline", 00:09:41.570 "raid_level": "concat", 00:09:41.570 "superblock": true, 00:09:41.570 "num_base_bdevs": 2, 00:09:41.570 "num_base_bdevs_discovered": 1, 00:09:41.570 "num_base_bdevs_operational": 1, 00:09:41.570 "base_bdevs_list": [ 00:09:41.570 { 00:09:41.570 "name": null, 00:09:41.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.570 "is_configured": false, 00:09:41.570 "data_offset": 0, 00:09:41.570 "data_size": 63488 00:09:41.570 }, 00:09:41.570 { 00:09:41.570 "name": "BaseBdev2", 00:09:41.570 "uuid": "de7a1217-f985-4bc0-ba26-ca4a61cfd5a8", 00:09:41.570 "is_configured": true, 00:09:41.570 "data_offset": 2048, 00:09:41.570 "data_size": 63488 00:09:41.570 } 00:09:41.570 ] 00:09:41.570 }' 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.570 08:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.138 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:42.138 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:42.138 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:42.138 08:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.138 08:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.138 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:42.138 08:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.138 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:42.138 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:42.138 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:42.138 08:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.138 08:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.138 [2024-11-20 08:43:12.890589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:42.138 [2024-11-20 08:43:12.890655] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:42.138 08:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.138 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:42.138 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:42.138 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.138 08:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:42.138 08:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.138 08:43:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.138 08:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.138 08:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:42.138 08:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:42.138 08:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:42.138 08:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61902 00:09:42.138 08:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61902 ']' 00:09:42.138 08:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61902 00:09:42.138 08:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:42.138 08:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.138 08:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61902 00:09:42.397 killing process with pid 61902 00:09:42.397 08:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:42.397 08:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:42.397 08:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61902' 00:09:42.397 08:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61902 00:09:42.397 08:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61902 00:09:42.397 [2024-11-20 08:43:13.069807] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:42.397 [2024-11-20 08:43:13.085606] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:43.334 ************************************ 00:09:43.334 END TEST raid_state_function_test_sb 00:09:43.334 ************************************ 00:09:43.334 08:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:43.334 00:09:43.334 real 0m5.577s 00:09:43.334 user 0m8.426s 00:09:43.334 sys 0m0.812s 00:09:43.334 08:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.334 08:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.334 08:43:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:09:43.334 08:43:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:43.334 08:43:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.334 08:43:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:43.334 ************************************ 00:09:43.334 START TEST raid_superblock_test 00:09:43.334 ************************************ 00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:43.334 
08:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62159 00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62159 00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62159 ']' 00:09:43.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.334 08:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.594 [2024-11-20 08:43:14.289168] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:09:43.594 [2024-11-20 08:43:14.289367] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62159 ] 00:09:43.594 [2024-11-20 08:43:14.478274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.852 [2024-11-20 08:43:14.636344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.111 [2024-11-20 08:43:14.859396] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.111 [2024-11-20 08:43:14.859468] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:44.405 08:43:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.405 malloc1 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.405 [2024-11-20 08:43:15.256661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:44.405 [2024-11-20 08:43:15.256744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.405 [2024-11-20 08:43:15.256780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:44.405 [2024-11-20 08:43:15.256800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.405 [2024-11-20 08:43:15.259805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.405 [2024-11-20 08:43:15.259856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:44.405 pt1 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:44.405 08:43:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.405 08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.673 malloc2 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.673 [2024-11-20 08:43:15.308616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:44.673 [2024-11-20 08:43:15.308690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.673 [2024-11-20 08:43:15.308722] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:44.673 
[2024-11-20 08:43:15.308737] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.673 [2024-11-20 08:43:15.311502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.673 [2024-11-20 08:43:15.311688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:44.673 pt2 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.673 [2024-11-20 08:43:15.320712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:44.673 [2024-11-20 08:43:15.323105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:44.673 [2024-11-20 08:43:15.323473] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:44.673 [2024-11-20 08:43:15.323512] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:44.673 [2024-11-20 08:43:15.323819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:44.673 [2024-11-20 08:43:15.324017] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:44.673 [2024-11-20 08:43:15.324040] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:44.673 [2024-11-20 08:43:15.324235] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.673 "name": "raid_bdev1", 00:09:44.673 "uuid": 
"609aad3a-0586-4a2f-9b8e-e86d246405f5", 00:09:44.673 "strip_size_kb": 64, 00:09:44.673 "state": "online", 00:09:44.673 "raid_level": "concat", 00:09:44.673 "superblock": true, 00:09:44.673 "num_base_bdevs": 2, 00:09:44.673 "num_base_bdevs_discovered": 2, 00:09:44.673 "num_base_bdevs_operational": 2, 00:09:44.673 "base_bdevs_list": [ 00:09:44.673 { 00:09:44.673 "name": "pt1", 00:09:44.673 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.673 "is_configured": true, 00:09:44.673 "data_offset": 2048, 00:09:44.673 "data_size": 63488 00:09:44.673 }, 00:09:44.673 { 00:09:44.673 "name": "pt2", 00:09:44.673 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.673 "is_configured": true, 00:09:44.673 "data_offset": 2048, 00:09:44.673 "data_size": 63488 00:09:44.673 } 00:09:44.673 ] 00:09:44.673 }' 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.673 08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.932 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:44.932 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:44.932 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:44.932 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:44.932 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:44.932 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:44.932 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:44.932 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:44.932 08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.932 
08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.932 [2024-11-20 08:43:15.841157] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.191 08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.191 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:45.191 "name": "raid_bdev1", 00:09:45.191 "aliases": [ 00:09:45.191 "609aad3a-0586-4a2f-9b8e-e86d246405f5" 00:09:45.191 ], 00:09:45.191 "product_name": "Raid Volume", 00:09:45.191 "block_size": 512, 00:09:45.191 "num_blocks": 126976, 00:09:45.191 "uuid": "609aad3a-0586-4a2f-9b8e-e86d246405f5", 00:09:45.191 "assigned_rate_limits": { 00:09:45.191 "rw_ios_per_sec": 0, 00:09:45.191 "rw_mbytes_per_sec": 0, 00:09:45.191 "r_mbytes_per_sec": 0, 00:09:45.191 "w_mbytes_per_sec": 0 00:09:45.191 }, 00:09:45.191 "claimed": false, 00:09:45.191 "zoned": false, 00:09:45.191 "supported_io_types": { 00:09:45.191 "read": true, 00:09:45.191 "write": true, 00:09:45.191 "unmap": true, 00:09:45.191 "flush": true, 00:09:45.191 "reset": true, 00:09:45.191 "nvme_admin": false, 00:09:45.191 "nvme_io": false, 00:09:45.191 "nvme_io_md": false, 00:09:45.191 "write_zeroes": true, 00:09:45.191 "zcopy": false, 00:09:45.191 "get_zone_info": false, 00:09:45.191 "zone_management": false, 00:09:45.191 "zone_append": false, 00:09:45.191 "compare": false, 00:09:45.191 "compare_and_write": false, 00:09:45.191 "abort": false, 00:09:45.191 "seek_hole": false, 00:09:45.191 "seek_data": false, 00:09:45.191 "copy": false, 00:09:45.191 "nvme_iov_md": false 00:09:45.191 }, 00:09:45.191 "memory_domains": [ 00:09:45.191 { 00:09:45.191 "dma_device_id": "system", 00:09:45.191 "dma_device_type": 1 00:09:45.191 }, 00:09:45.191 { 00:09:45.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.191 "dma_device_type": 2 00:09:45.191 }, 00:09:45.191 { 00:09:45.191 "dma_device_id": "system", 00:09:45.191 
"dma_device_type": 1 00:09:45.191 }, 00:09:45.191 { 00:09:45.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.191 "dma_device_type": 2 00:09:45.191 } 00:09:45.191 ], 00:09:45.191 "driver_specific": { 00:09:45.191 "raid": { 00:09:45.191 "uuid": "609aad3a-0586-4a2f-9b8e-e86d246405f5", 00:09:45.191 "strip_size_kb": 64, 00:09:45.191 "state": "online", 00:09:45.191 "raid_level": "concat", 00:09:45.191 "superblock": true, 00:09:45.191 "num_base_bdevs": 2, 00:09:45.191 "num_base_bdevs_discovered": 2, 00:09:45.191 "num_base_bdevs_operational": 2, 00:09:45.191 "base_bdevs_list": [ 00:09:45.191 { 00:09:45.191 "name": "pt1", 00:09:45.191 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:45.191 "is_configured": true, 00:09:45.191 "data_offset": 2048, 00:09:45.191 "data_size": 63488 00:09:45.191 }, 00:09:45.191 { 00:09:45.191 "name": "pt2", 00:09:45.191 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.191 "is_configured": true, 00:09:45.191 "data_offset": 2048, 00:09:45.191 "data_size": 63488 00:09:45.191 } 00:09:45.191 ] 00:09:45.191 } 00:09:45.191 } 00:09:45.191 }' 00:09:45.191 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:45.191 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:45.191 pt2' 00:09:45.191 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.191 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:45.191 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.191 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:45.191 08:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.191 08:43:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.191 08:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.192 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.192 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.192 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.192 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.192 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:45.192 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.192 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.192 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.192 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.192 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.192 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.192 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:45.192 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.192 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:45.192 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.192 [2024-11-20 08:43:16.081218] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json
00:09:45.192 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:45.451 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=609aad3a-0586-4a2f-9b8e-e86d246405f5
00:09:45.451 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 609aad3a-0586-4a2f-9b8e-e86d246405f5 ']'
00:09:45.451 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:45.451 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:45.451 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.451 [2024-11-20 08:43:16.132855] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:45.451 [2024-11-20 08:43:16.132889] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:45.452 [2024-11-20 08:43:16.133021] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:45.452 [2024-11-20 08:43:16.133088] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:45.452 [2024-11-20 08:43:16.133111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.452 [2024-11-20 08:43:16.256892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:09:45.452 [2024-11-20 08:43:16.259775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:09:45.452 [2024-11-20 08:43:16.260054] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:09:45.452 [2024-11-20 08:43:16.260285] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:09:45.452 [2024-11-20 08:43:16.260475] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:45.452 [2024-11-20 08:43:16.260596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:09:45.452 request:
00:09:45.452 {
00:09:45.452 "name": "raid_bdev1",
00:09:45.452 "raid_level": "concat",
00:09:45.452 "base_bdevs": [
00:09:45.452 "malloc1",
00:09:45.452 "malloc2"
00:09:45.452 ],
00:09:45.452 "strip_size_kb": 64,
00:09:45.452 "superblock": false,
00:09:45.452 "method": "bdev_raid_create",
00:09:45.452 "req_id": 1
00:09:45.452 }
00:09:45.452 Got JSON-RPC error response
00:09:45.452 response:
00:09:45.452 {
00:09:45.452 "code": -17,
00:09:45.452 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:09:45.452 }
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.452 [2024-11-20 08:43:16.312982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:45.452 [2024-11-20 08:43:16.313179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:45.452 [2024-11-20 08:43:16.313256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:09:45.452 [2024-11-20 08:43:16.313455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:45.452 [2024-11-20 08:43:16.316325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:45.452 [2024-11-20 08:43:16.316502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:45.452 [2024-11-20 08:43:16.316702] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:09:45.452 pt1
00:09:45.452 [2024-11-20 08:43:16.316887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:45.452 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:45.712 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:45.712 "name": "raid_bdev1",
00:09:45.712 "uuid": "609aad3a-0586-4a2f-9b8e-e86d246405f5",
00:09:45.712 "strip_size_kb": 64,
00:09:45.712 "state": "configuring",
00:09:45.712 "raid_level": "concat",
00:09:45.712 "superblock": true,
00:09:45.712 "num_base_bdevs": 2,
00:09:45.712 "num_base_bdevs_discovered": 1,
00:09:45.712 "num_base_bdevs_operational": 2,
00:09:45.712 "base_bdevs_list": [
00:09:45.712 {
00:09:45.712 "name": "pt1",
00:09:45.712 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:45.712 "is_configured": true,
00:09:45.712 "data_offset": 2048,
00:09:45.712 "data_size": 63488
00:09:45.712 },
00:09:45.712 {
00:09:45.712 "name": null,
00:09:45.712 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:45.712 "is_configured": false,
00:09:45.712 "data_offset": 2048,
00:09:45.712 "data_size": 63488
00:09:45.712 }
00:09:45.712 ]
00:09:45.712 }'
00:09:45.712 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:45.712 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.971 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:09:45.971 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:09:45.971 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:45.971 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:45.971 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:45.971 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.971 [2024-11-20 08:43:16.837327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:45.971 [2024-11-20 08:43:16.837416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:45.971 [2024-11-20 08:43:16.837458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:09:45.971 [2024-11-20 08:43:16.837476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:45.971 [2024-11-20 08:43:16.838051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:45.971 [2024-11-20 08:43:16.838103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:45.971 [2024-11-20 08:43:16.838224] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:45.971 [2024-11-20 08:43:16.838262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:45.971 [2024-11-20 08:43:16.838412] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:09:45.971 [2024-11-20 08:43:16.838440] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:09:45.971 [2024-11-20 08:43:16.838748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:09:45.971 [2024-11-20 08:43:16.838939] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:09:45.971 [2024-11-20 08:43:16.838955] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:09:45.971 [2024-11-20 08:43:16.839153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:45.971 pt2
00:09:45.971 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:45.971 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:09:45.971 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:45.971 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:09:45.971 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:45.971 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:45.971 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:45.971 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:45.971 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:45.971 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:45.971 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:45.971 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:45.971 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:45.971 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:45.971 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:45.971 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:45.971 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.971 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:46.230 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:46.230 "name": "raid_bdev1",
00:09:46.230 "uuid": "609aad3a-0586-4a2f-9b8e-e86d246405f5",
00:09:46.230 "strip_size_kb": 64,
00:09:46.230 "state": "online",
00:09:46.230 "raid_level": "concat",
00:09:46.230 "superblock": true,
00:09:46.230 "num_base_bdevs": 2,
00:09:46.230 "num_base_bdevs_discovered": 2,
00:09:46.230 "num_base_bdevs_operational": 2,
00:09:46.230 "base_bdevs_list": [
00:09:46.230 {
00:09:46.230 "name": "pt1",
00:09:46.230 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:46.230 "is_configured": true,
00:09:46.230 "data_offset": 2048,
00:09:46.230 "data_size": 63488
00:09:46.230 },
00:09:46.230 {
00:09:46.230 "name": "pt2",
00:09:46.230 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:46.230 "is_configured": true,
00:09:46.230 "data_offset": 2048,
00:09:46.230 "data_size": 63488
00:09:46.230 }
00:09:46.230 ]
00:09:46.230 }'
00:09:46.230 08:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:46.230 08:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.491 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:09:46.491 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:46.491 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:46.491 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:46.491 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:46.491 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:46.491 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:46.491 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:46.491 08:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:46.491 08:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.491 [2024-11-20 08:43:17.361811] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:46.491 08:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:46.491 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:46.491 "name": "raid_bdev1",
00:09:46.491 "aliases": [
00:09:46.491 "609aad3a-0586-4a2f-9b8e-e86d246405f5"
00:09:46.491 ],
00:09:46.491 "product_name": "Raid Volume",
00:09:46.491 "block_size": 512,
00:09:46.491 "num_blocks": 126976,
00:09:46.491 "uuid": "609aad3a-0586-4a2f-9b8e-e86d246405f5",
00:09:46.491 "assigned_rate_limits": {
00:09:46.491 "rw_ios_per_sec": 0,
00:09:46.492 "rw_mbytes_per_sec": 0,
00:09:46.492 "r_mbytes_per_sec": 0,
00:09:46.492 "w_mbytes_per_sec": 0
00:09:46.492 },
00:09:46.492 "claimed": false,
00:09:46.492 "zoned": false,
00:09:46.492 "supported_io_types": {
00:09:46.492 "read": true,
00:09:46.492 "write": true,
00:09:46.492 "unmap": true,
00:09:46.492 "flush": true,
00:09:46.492 "reset": true,
00:09:46.492 "nvme_admin": false,
00:09:46.492 "nvme_io": false,
00:09:46.492 "nvme_io_md": false,
00:09:46.492 "write_zeroes": true,
00:09:46.492 "zcopy": false,
00:09:46.492 "get_zone_info": false,
00:09:46.492 "zone_management": false,
00:09:46.492 "zone_append": false,
00:09:46.492 "compare": false,
00:09:46.492 "compare_and_write": false,
00:09:46.492 "abort": false,
00:09:46.492 "seek_hole": false,
00:09:46.492 "seek_data": false,
00:09:46.492 "copy": false,
00:09:46.492 "nvme_iov_md": false
00:09:46.492 },
00:09:46.492 "memory_domains": [
00:09:46.492 {
00:09:46.492 "dma_device_id": "system",
00:09:46.492 "dma_device_type": 1
00:09:46.492 },
00:09:46.492 {
00:09:46.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:46.492 "dma_device_type": 2
00:09:46.492 },
00:09:46.492 {
00:09:46.492 "dma_device_id": "system",
00:09:46.492 "dma_device_type": 1
00:09:46.492 },
00:09:46.492 {
00:09:46.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:46.492 "dma_device_type": 2
00:09:46.492 }
00:09:46.492 ],
00:09:46.492 "driver_specific": {
00:09:46.492 "raid": {
00:09:46.492 "uuid": "609aad3a-0586-4a2f-9b8e-e86d246405f5",
00:09:46.492 "strip_size_kb": 64,
00:09:46.492 "state": "online",
00:09:46.492 "raid_level": "concat",
00:09:46.492 "superblock": true,
00:09:46.492 "num_base_bdevs": 2,
00:09:46.492 "num_base_bdevs_discovered": 2,
00:09:46.492 "num_base_bdevs_operational": 2,
00:09:46.492 "base_bdevs_list": [
00:09:46.492 {
00:09:46.492 "name": "pt1",
00:09:46.492 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:46.492 "is_configured": true,
00:09:46.492 "data_offset": 2048,
00:09:46.492 "data_size": 63488
00:09:46.492 },
00:09:46.492 {
00:09:46.492 "name": "pt2",
00:09:46.492 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:46.492 "is_configured": true,
00:09:46.492 "data_offset": 2048,
00:09:46.492 "data_size": 63488
00:09:46.492 }
00:09:46.492 ]
00:09:46.492 }
00:09:46.492 }
00:09:46.492 }'
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:09:46.751 pt2'
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:09:46.751 [2024-11-20 08:43:17.621862] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:46.751 08:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:47.009 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 609aad3a-0586-4a2f-9b8e-e86d246405f5 '!=' 609aad3a-0586-4a2f-9b8e-e86d246405f5 ']'
00:09:47.010 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat
00:09:47.010 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:47.010 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:47.010 08:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62159
00:09:47.010 08:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62159 ']'
00:09:47.010 08:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62159
00:09:47.010 08:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname
00:09:47.010 08:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:47.010 08:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62159
00:09:47.010 08:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:47.010 killing process with pid 62159
00:09:47.010 08:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:47.010 08:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62159'
00:09:47.010 08:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62159
00:09:47.010 [2024-11-20 08:43:17.711381] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:47.010 08:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62159
00:09:47.010 [2024-11-20 08:43:17.711509] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:47.010 [2024-11-20 08:43:17.711587] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:47.010 [2024-11-20 08:43:17.711606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:09:47.010 [2024-11-20 08:43:17.900500] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:48.389 08:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:09:48.389
00:09:48.389 real 0m4.767s
00:09:48.389 user 0m7.008s
00:09:48.389 sys 0m0.680s
00:09:48.389 ************************************
00:09:48.389 END TEST raid_superblock_test
00:09:48.389 ************************************
00:09:48.389 08:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:48.389 08:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:48.389 08:43:18 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read
00:09:48.389 08:43:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:09:48.389 08:43:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:48.389 08:43:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:48.389 ************************************
00:09:48.389 START TEST raid_read_error_test
00:09:48.389 ************************************
00:09:48.389 08:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read
00:09:48.389 08:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:09:48.389 08:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:09:48.389 08:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:09:48.389 08:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:09:48.389 08:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:48.389 08:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:09:48.389 08:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:48.389 08:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:48.389 08:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:09:48.389 08:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:48.389 08:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:48.389 08:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:09:48.389 08:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:09:48.389 08:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:09:48.389 08:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:09:48.389 08:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:09:48.389 08:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:09:48.389 08:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:09:48.389 08:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:09:48.389 08:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:09:48.389 08:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:09:48.389 08:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:09:48.389 08:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aIxFBGXb6u
00:09:48.389 08:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62371
00:09:48.389 08:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62371
00:09:48.389 08:43:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62371 ']'
00:09:48.389 08:43:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:48.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:48.389 08:43:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:48.389 08:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:09:48.389 08:43:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:48.389 08:43:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:48.389 08:43:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:48.389 [2024-11-20 08:43:19.119207] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
00:09:48.389 [2024-11-20 08:43:19.119383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62371 ]
00:09:48.648 [2024-11-20 08:43:19.305778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:48.648 [2024-11-20 08:43:19.450324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:48.907 [2024-11-20 08:43:19.665630] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:48.907 [2024-11-20 08:43:19.665699] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.477 BaseBdev1_malloc
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.477 true
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.477 [2024-11-20 08:43:20.195622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:09:49.477 [2024-11-20 08:43:20.195699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:49.477 [2024-11-20 08:43:20.195730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:09:49.477 [2024-11-20 08:43:20.195748] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:49.477 [2024-11-20 08:43:20.198578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:49.477 [2024-11-20 08:43:20.198630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:09:49.477 BaseBdev1
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.477 BaseBdev2_malloc
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.477 true
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:49.477 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.477 [2024-11-20 08:43:20.262969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:09:49.477 [2024-11-20 08:43:20.263217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:49.477 [2024-11-20 08:43:20.263254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:09:49.477 [2024-11-20 08:43:20.263273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:49.477 [2024-11-20 08:43:20.266201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:49.477 [2024-11-20 08:43:20.266411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:09:49.477 BaseBdev2
00:09:49.478 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:49.478 08:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:09:49.478 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:49.478 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:49.478 [2024-11-20 08:43:20.271246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:49.478 [2024-11-20 08:43:20.273630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:49.478 [2024-11-20 08:43:20.273881] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:09:49.478 [2024-11-20 08:43:20.273905] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:09:49.478 [2024-11-20 08:43:20.274217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:09:49.478 [2024-11-20 08:43:20.274448] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:09:49.478 [2024-11-20 08:43:20.274468] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:09:49.478 [2024-11-20 08:43:20.274652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:49.478 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:49.478 08:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:09:49.478 08:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:49.478 08:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:49.478 08:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:49.478 08:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:49.478 08:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local
num_base_bdevs_operational=2 00:09:49.478 08:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.478 08:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.478 08:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.478 08:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.478 08:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.478 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.478 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.478 08:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.478 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.478 08:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.478 "name": "raid_bdev1", 00:09:49.478 "uuid": "4fec5d9d-21ab-45a9-8e5d-cc05e7290500", 00:09:49.478 "strip_size_kb": 64, 00:09:49.478 "state": "online", 00:09:49.478 "raid_level": "concat", 00:09:49.478 "superblock": true, 00:09:49.478 "num_base_bdevs": 2, 00:09:49.478 "num_base_bdevs_discovered": 2, 00:09:49.478 "num_base_bdevs_operational": 2, 00:09:49.478 "base_bdevs_list": [ 00:09:49.478 { 00:09:49.478 "name": "BaseBdev1", 00:09:49.478 "uuid": "181cd0d0-8e88-5404-a372-375135c4ce21", 00:09:49.478 "is_configured": true, 00:09:49.478 "data_offset": 2048, 00:09:49.478 "data_size": 63488 00:09:49.478 }, 00:09:49.478 { 00:09:49.478 "name": "BaseBdev2", 00:09:49.478 "uuid": "806bef18-b337-504f-b276-a03b457a03b7", 00:09:49.478 "is_configured": true, 00:09:49.478 "data_offset": 2048, 00:09:49.478 "data_size": 63488 00:09:49.478 } 00:09:49.478 ] 00:09:49.478 }' 00:09:49.478 08:43:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.478 08:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.045 08:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:50.045 08:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:50.045 [2024-11-20 08:43:20.917291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:50.982 08:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:50.982 08:43:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.982 08:43:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.982 08:43:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.982 08:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:50.982 08:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:50.982 08:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:50.982 08:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:50.982 08:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.982 08:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.982 08:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.982 08:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.982 08:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:09:50.982 08:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.982 08:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.982 08:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.982 08:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.982 08:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.982 08:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.982 08:43:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.982 08:43:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.982 08:43:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.982 08:43:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.982 "name": "raid_bdev1", 00:09:50.982 "uuid": "4fec5d9d-21ab-45a9-8e5d-cc05e7290500", 00:09:50.982 "strip_size_kb": 64, 00:09:50.982 "state": "online", 00:09:50.982 "raid_level": "concat", 00:09:50.982 "superblock": true, 00:09:50.982 "num_base_bdevs": 2, 00:09:50.982 "num_base_bdevs_discovered": 2, 00:09:50.982 "num_base_bdevs_operational": 2, 00:09:50.982 "base_bdevs_list": [ 00:09:50.982 { 00:09:50.982 "name": "BaseBdev1", 00:09:50.982 "uuid": "181cd0d0-8e88-5404-a372-375135c4ce21", 00:09:50.982 "is_configured": true, 00:09:50.982 "data_offset": 2048, 00:09:50.982 "data_size": 63488 00:09:50.982 }, 00:09:50.982 { 00:09:50.982 "name": "BaseBdev2", 00:09:50.982 "uuid": "806bef18-b337-504f-b276-a03b457a03b7", 00:09:50.982 "is_configured": true, 00:09:50.982 "data_offset": 2048, 00:09:50.982 "data_size": 63488 00:09:50.982 } 00:09:50.982 ] 00:09:50.982 }' 00:09:50.982 08:43:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.982 08:43:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.551 08:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:51.551 08:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.551 08:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.551 [2024-11-20 08:43:22.360093] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.551 [2024-11-20 08:43:22.360305] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.551 [2024-11-20 08:43:22.363890] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.551 [2024-11-20 08:43:22.363947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.551 [2024-11-20 08:43:22.363991] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.551 [2024-11-20 08:43:22.364013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:51.551 { 00:09:51.551 "results": [ 00:09:51.551 { 00:09:51.551 "job": "raid_bdev1", 00:09:51.551 "core_mask": "0x1", 00:09:51.551 "workload": "randrw", 00:09:51.551 "percentage": 50, 00:09:51.551 "status": "finished", 00:09:51.551 "queue_depth": 1, 00:09:51.551 "io_size": 131072, 00:09:51.551 "runtime": 1.440561, 00:09:51.551 "iops": 10315.42572650516, 00:09:51.551 "mibps": 1289.428215813145, 00:09:51.551 "io_failed": 1, 00:09:51.551 "io_timeout": 0, 00:09:51.551 "avg_latency_us": 135.4083412960097, 00:09:51.551 "min_latency_us": 38.167272727272724, 00:09:51.551 "max_latency_us": 2055.447272727273 00:09:51.551 } 00:09:51.551 ], 00:09:51.551 "core_count": 1 00:09:51.551 } 00:09:51.551 08:43:22 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.551 08:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62371 00:09:51.551 08:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62371 ']' 00:09:51.551 08:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62371 00:09:51.551 08:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:51.551 08:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.551 08:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62371 00:09:51.551 killing process with pid 62371 00:09:51.551 08:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.551 08:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.551 08:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62371' 00:09:51.551 08:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62371 00:09:51.551 [2024-11-20 08:43:22.397999] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.551 08:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62371 00:09:51.810 [2024-11-20 08:43:22.521387] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:53.187 08:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aIxFBGXb6u 00:09:53.187 08:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:53.187 08:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:53.187 ************************************ 00:09:53.187 END TEST raid_read_error_test 00:09:53.187 
************************************ 00:09:53.187 08:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:09:53.187 08:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:53.187 08:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:53.187 08:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:53.187 08:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:09:53.187 00:09:53.187 real 0m4.690s 00:09:53.187 user 0m5.908s 00:09:53.187 sys 0m0.575s 00:09:53.187 08:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.187 08:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.187 08:43:23 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:09:53.187 08:43:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:53.187 08:43:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.187 08:43:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:53.187 ************************************ 00:09:53.187 START TEST raid_write_error_test 00:09:53.187 ************************************ 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= 
num_base_bdevs )) 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tHJR4Kkv1s 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # 
raid_pid=62522 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62522 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62522 ']' 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.187 08:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.187 [2024-11-20 08:43:23.860523] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:09:53.187 [2024-11-20 08:43:23.860709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62522 ] 00:09:53.187 [2024-11-20 08:43:24.044835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.446 [2024-11-20 08:43:24.184226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.706 [2024-11-20 08:43:24.399679] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.706 [2024-11-20 08:43:24.399735] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.965 08:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.965 08:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:53.965 08:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.965 08:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:53.965 08:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.965 08:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.224 BaseBdev1_malloc 00:09:54.224 08:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.224 08:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:54.224 08:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.224 08:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.224 true 00:09:54.224 08:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:54.224 08:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:54.224 08:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.224 08:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.224 [2024-11-20 08:43:24.934194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:54.224 [2024-11-20 08:43:24.934269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.224 [2024-11-20 08:43:24.934300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:54.224 [2024-11-20 08:43:24.934318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.224 [2024-11-20 08:43:24.937385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.224 [2024-11-20 08:43:24.937446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:54.224 BaseBdev1 00:09:54.224 08:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.224 08:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:54.224 08:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:54.224 08:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.224 08:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.224 BaseBdev2_malloc 00:09:54.224 08:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.224 08:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:54.224 08:43:24 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.224 08:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.224 true 00:09:54.224 08:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.224 08:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:54.224 08:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.224 08:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.224 [2024-11-20 08:43:24.992169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:54.224 [2024-11-20 08:43:24.992250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.224 [2024-11-20 08:43:24.992276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:54.224 [2024-11-20 08:43:24.992293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.224 [2024-11-20 08:43:24.995103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.224 [2024-11-20 08:43:24.995317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:54.224 BaseBdev2 00:09:54.224 08:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.224 08:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:54.224 08:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.224 08:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.224 [2024-11-20 08:43:25.000288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:54.224 [2024-11-20 08:43:25.002814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:54.224 [2024-11-20 08:43:25.003255] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:54.224 [2024-11-20 08:43:25.003287] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:54.224 [2024-11-20 08:43:25.003615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:54.224 [2024-11-20 08:43:25.003851] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:54.224 [2024-11-20 08:43:25.003871] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:54.224 [2024-11-20 08:43:25.004060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.224 08:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.224 08:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:54.224 08:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.224 08:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.224 08:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.224 08:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.224 08:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:54.224 08:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.224 08:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.224 08:43:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.224 08:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.224 08:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.224 08:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.224 08:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.224 08:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.224 08:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.224 08:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.224 "name": "raid_bdev1", 00:09:54.224 "uuid": "b39721ce-d9cf-4790-ab95-2637d73c846c", 00:09:54.224 "strip_size_kb": 64, 00:09:54.224 "state": "online", 00:09:54.224 "raid_level": "concat", 00:09:54.224 "superblock": true, 00:09:54.224 "num_base_bdevs": 2, 00:09:54.224 "num_base_bdevs_discovered": 2, 00:09:54.224 "num_base_bdevs_operational": 2, 00:09:54.224 "base_bdevs_list": [ 00:09:54.224 { 00:09:54.224 "name": "BaseBdev1", 00:09:54.224 "uuid": "31aec824-7a4d-5693-b11f-bd9b8c756269", 00:09:54.224 "is_configured": true, 00:09:54.224 "data_offset": 2048, 00:09:54.224 "data_size": 63488 00:09:54.224 }, 00:09:54.224 { 00:09:54.224 "name": "BaseBdev2", 00:09:54.224 "uuid": "c55c7eef-4870-5634-934c-85f79f467a6f", 00:09:54.224 "is_configured": true, 00:09:54.224 "data_offset": 2048, 00:09:54.224 "data_size": 63488 00:09:54.224 } 00:09:54.224 ] 00:09:54.225 }' 00:09:54.225 08:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.225 08:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.793 08:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:09:54.793 08:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:54.793 [2024-11-20 08:43:25.641957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.729 "name": "raid_bdev1", 00:09:55.729 "uuid": "b39721ce-d9cf-4790-ab95-2637d73c846c", 00:09:55.729 "strip_size_kb": 64, 00:09:55.729 "state": "online", 00:09:55.729 "raid_level": "concat", 00:09:55.729 "superblock": true, 00:09:55.729 "num_base_bdevs": 2, 00:09:55.729 "num_base_bdevs_discovered": 2, 00:09:55.729 "num_base_bdevs_operational": 2, 00:09:55.729 "base_bdevs_list": [ 00:09:55.729 { 00:09:55.729 "name": "BaseBdev1", 00:09:55.729 "uuid": "31aec824-7a4d-5693-b11f-bd9b8c756269", 00:09:55.729 "is_configured": true, 00:09:55.729 "data_offset": 2048, 00:09:55.729 "data_size": 63488 00:09:55.729 }, 00:09:55.729 { 00:09:55.729 "name": "BaseBdev2", 00:09:55.729 "uuid": "c55c7eef-4870-5634-934c-85f79f467a6f", 00:09:55.729 "is_configured": true, 00:09:55.729 "data_offset": 2048, 00:09:55.729 "data_size": 63488 00:09:55.729 } 00:09:55.729 ] 00:09:55.729 }' 00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.729 08:43:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.298 08:43:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:56.298 08:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.298 08:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.298 [2024-11-20 08:43:27.051633] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:56.298 [2024-11-20 08:43:27.051821] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:56.298 { 00:09:56.298 "results": [ 00:09:56.298 { 00:09:56.298 "job": "raid_bdev1", 00:09:56.298 "core_mask": "0x1", 00:09:56.298 "workload": "randrw", 00:09:56.298 "percentage": 50, 00:09:56.298 "status": "finished", 00:09:56.298 "queue_depth": 1, 00:09:56.298 "io_size": 131072, 00:09:56.298 "runtime": 1.407836, 00:09:56.298 "iops": 9892.487477234565, 00:09:56.298 "mibps": 1236.5609346543206, 00:09:56.298 "io_failed": 1, 00:09:56.298 "io_timeout": 0, 00:09:56.298 "avg_latency_us": 140.52852383687537, 00:09:56.298 "min_latency_us": 39.33090909090909, 00:09:56.298 "max_latency_us": 1899.0545454545454 00:09:56.298 } 00:09:56.298 ], 00:09:56.298 "core_count": 1 00:09:56.298 } 00:09:56.298 [2024-11-20 08:43:27.059302] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.298 [2024-11-20 08:43:27.059537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.298 [2024-11-20 08:43:27.059633] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:56.298 [2024-11-20 08:43:27.059677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:56.298 08:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.298 08:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62522 00:09:56.298 08:43:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62522 ']' 00:09:56.298 08:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62522 00:09:56.298 08:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:56.298 08:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.298 08:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62522 00:09:56.298 killing process with pid 62522 00:09:56.298 08:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.298 08:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.298 08:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62522' 00:09:56.298 08:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62522 00:09:56.298 [2024-11-20 08:43:27.101265] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:56.298 08:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62522 00:09:56.557 [2024-11-20 08:43:27.251827] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.493 08:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tHJR4Kkv1s 00:09:57.493 08:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:57.493 08:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:57.493 ************************************ 00:09:57.493 END TEST raid_write_error_test 00:09:57.493 ************************************ 00:09:57.493 08:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:57.493 08:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # 
has_redundancy concat 00:09:57.493 08:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:57.493 08:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:57.493 08:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:57.493 00:09:57.493 real 0m4.624s 00:09:57.493 user 0m5.750s 00:09:57.493 sys 0m0.624s 00:09:57.493 08:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.493 08:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.493 08:43:28 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:57.493 08:43:28 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:09:57.493 08:43:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:57.493 08:43:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.493 08:43:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:57.753 ************************************ 00:09:57.753 START TEST raid_state_function_test 00:09:57.753 ************************************ 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( 
i <= num_base_bdevs )) 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:57.753 Process raid pid: 62660 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62660 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process 
raid pid: 62660' 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62660 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62660 ']' 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.753 08:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.753 [2024-11-20 08:43:28.524421] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:09:57.753 [2024-11-20 08:43:28.524614] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.039 [2024-11-20 08:43:28.710804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.039 [2024-11-20 08:43:28.833308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.298 [2024-11-20 08:43:29.035996] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.298 [2024-11-20 08:43:29.036059] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.864 08:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.864 08:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:58.864 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:58.864 08:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.864 08:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.864 [2024-11-20 08:43:29.489729] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.864 [2024-11-20 08:43:29.489999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.864 [2024-11-20 08:43:29.490035] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.864 [2024-11-20 08:43:29.490060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.864 08:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.864 08:43:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:58.864 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.864 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.864 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.864 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.864 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:58.864 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.864 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.864 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.864 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.864 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.864 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.865 08:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.865 08:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.865 08:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.865 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.865 "name": "Existed_Raid", 00:09:58.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.865 "strip_size_kb": 0, 00:09:58.865 "state": "configuring", 00:09:58.865 
"raid_level": "raid1", 00:09:58.865 "superblock": false, 00:09:58.865 "num_base_bdevs": 2, 00:09:58.865 "num_base_bdevs_discovered": 0, 00:09:58.865 "num_base_bdevs_operational": 2, 00:09:58.865 "base_bdevs_list": [ 00:09:58.865 { 00:09:58.865 "name": "BaseBdev1", 00:09:58.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.865 "is_configured": false, 00:09:58.865 "data_offset": 0, 00:09:58.865 "data_size": 0 00:09:58.865 }, 00:09:58.865 { 00:09:58.865 "name": "BaseBdev2", 00:09:58.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.865 "is_configured": false, 00:09:58.865 "data_offset": 0, 00:09:58.865 "data_size": 0 00:09:58.865 } 00:09:58.865 ] 00:09:58.865 }' 00:09:58.865 08:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.865 08:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.124 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.124 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.124 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.124 [2024-11-20 08:43:30.025860] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.124 [2024-11-20 08:43:30.025915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:59.124 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.124 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:59.124 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.124 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:59.124 [2024-11-20 08:43:30.033823] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:59.124 [2024-11-20 08:43:30.033878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:59.124 [2024-11-20 08:43:30.033895] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.124 [2024-11-20 08:43:30.033916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.383 [2024-11-20 08:43:30.079984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.383 BaseBdev1 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.383 [ 00:09:59.383 { 00:09:59.383 "name": "BaseBdev1", 00:09:59.383 "aliases": [ 00:09:59.383 "c04f0509-7b90-4ae4-b130-04b80fbc2ad6" 00:09:59.383 ], 00:09:59.383 "product_name": "Malloc disk", 00:09:59.383 "block_size": 512, 00:09:59.383 "num_blocks": 65536, 00:09:59.383 "uuid": "c04f0509-7b90-4ae4-b130-04b80fbc2ad6", 00:09:59.383 "assigned_rate_limits": { 00:09:59.383 "rw_ios_per_sec": 0, 00:09:59.383 "rw_mbytes_per_sec": 0, 00:09:59.383 "r_mbytes_per_sec": 0, 00:09:59.383 "w_mbytes_per_sec": 0 00:09:59.383 }, 00:09:59.383 "claimed": true, 00:09:59.383 "claim_type": "exclusive_write", 00:09:59.383 "zoned": false, 00:09:59.383 "supported_io_types": { 00:09:59.383 "read": true, 00:09:59.383 "write": true, 00:09:59.383 "unmap": true, 00:09:59.383 "flush": true, 00:09:59.383 "reset": true, 00:09:59.383 "nvme_admin": false, 00:09:59.383 "nvme_io": false, 00:09:59.383 "nvme_io_md": false, 00:09:59.383 "write_zeroes": true, 00:09:59.383 "zcopy": true, 00:09:59.383 "get_zone_info": false, 00:09:59.383 "zone_management": false, 00:09:59.383 "zone_append": false, 00:09:59.383 "compare": false, 00:09:59.383 "compare_and_write": false, 00:09:59.383 "abort": true, 00:09:59.383 "seek_hole": false, 00:09:59.383 "seek_data": false, 00:09:59.383 "copy": true, 00:09:59.383 "nvme_iov_md": 
false 00:09:59.383 }, 00:09:59.383 "memory_domains": [ 00:09:59.383 { 00:09:59.383 "dma_device_id": "system", 00:09:59.383 "dma_device_type": 1 00:09:59.383 }, 00:09:59.383 { 00:09:59.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.383 "dma_device_type": 2 00:09:59.383 } 00:09:59.383 ], 00:09:59.383 "driver_specific": {} 00:09:59.383 } 00:09:59.383 ] 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.383 08:43:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.383 "name": "Existed_Raid", 00:09:59.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.383 "strip_size_kb": 0, 00:09:59.383 "state": "configuring", 00:09:59.383 "raid_level": "raid1", 00:09:59.383 "superblock": false, 00:09:59.383 "num_base_bdevs": 2, 00:09:59.383 "num_base_bdevs_discovered": 1, 00:09:59.383 "num_base_bdevs_operational": 2, 00:09:59.383 "base_bdevs_list": [ 00:09:59.383 { 00:09:59.383 "name": "BaseBdev1", 00:09:59.383 "uuid": "c04f0509-7b90-4ae4-b130-04b80fbc2ad6", 00:09:59.383 "is_configured": true, 00:09:59.383 "data_offset": 0, 00:09:59.383 "data_size": 65536 00:09:59.383 }, 00:09:59.383 { 00:09:59.383 "name": "BaseBdev2", 00:09:59.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.383 "is_configured": false, 00:09:59.383 "data_offset": 0, 00:09:59.383 "data_size": 0 00:09:59.383 } 00:09:59.383 ] 00:09:59.383 }' 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.383 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.953 [2024-11-20 08:43:30.652185] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.953 [2024-11-20 08:43:30.652282] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.953 [2024-11-20 08:43:30.660214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.953 [2024-11-20 08:43:30.662736] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.953 [2024-11-20 08:43:30.662978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.953 "name": "Existed_Raid", 00:09:59.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.953 "strip_size_kb": 0, 00:09:59.953 "state": "configuring", 00:09:59.953 "raid_level": "raid1", 00:09:59.953 "superblock": false, 00:09:59.953 "num_base_bdevs": 2, 00:09:59.953 "num_base_bdevs_discovered": 1, 00:09:59.953 "num_base_bdevs_operational": 2, 00:09:59.953 "base_bdevs_list": [ 00:09:59.953 { 00:09:59.953 "name": "BaseBdev1", 00:09:59.953 "uuid": "c04f0509-7b90-4ae4-b130-04b80fbc2ad6", 00:09:59.953 "is_configured": true, 00:09:59.953 "data_offset": 0, 00:09:59.953 "data_size": 65536 00:09:59.953 }, 00:09:59.953 { 00:09:59.953 "name": "BaseBdev2", 00:09:59.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.953 "is_configured": false, 00:09:59.953 "data_offset": 0, 00:09:59.953 "data_size": 0 00:09:59.953 } 00:09:59.953 
] 00:09:59.953 }' 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.953 08:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.522 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:00.522 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.522 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.522 [2024-11-20 08:43:31.206509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.523 [2024-11-20 08:43:31.206569] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:00.523 [2024-11-20 08:43:31.206582] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:00.523 [2024-11-20 08:43:31.206926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:00.523 [2024-11-20 08:43:31.207157] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:00.523 [2024-11-20 08:43:31.207209] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:00.523 [2024-11-20 08:43:31.207568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.523 BaseBdev2 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.523 08:43:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.523 [ 00:10:00.523 { 00:10:00.523 "name": "BaseBdev2", 00:10:00.523 "aliases": [ 00:10:00.523 "9d2f8e00-3b08-44a6-971c-adc097fbcba3" 00:10:00.523 ], 00:10:00.523 "product_name": "Malloc disk", 00:10:00.523 "block_size": 512, 00:10:00.523 "num_blocks": 65536, 00:10:00.523 "uuid": "9d2f8e00-3b08-44a6-971c-adc097fbcba3", 00:10:00.523 "assigned_rate_limits": { 00:10:00.523 "rw_ios_per_sec": 0, 00:10:00.523 "rw_mbytes_per_sec": 0, 00:10:00.523 "r_mbytes_per_sec": 0, 00:10:00.523 "w_mbytes_per_sec": 0 00:10:00.523 }, 00:10:00.523 "claimed": true, 00:10:00.523 "claim_type": "exclusive_write", 00:10:00.523 "zoned": false, 00:10:00.523 "supported_io_types": { 00:10:00.523 "read": true, 00:10:00.523 "write": true, 00:10:00.523 "unmap": true, 00:10:00.523 "flush": true, 00:10:00.523 "reset": true, 00:10:00.523 "nvme_admin": false, 00:10:00.523 "nvme_io": false, 00:10:00.523 "nvme_io_md": 
false, 00:10:00.523 "write_zeroes": true, 00:10:00.523 "zcopy": true, 00:10:00.523 "get_zone_info": false, 00:10:00.523 "zone_management": false, 00:10:00.523 "zone_append": false, 00:10:00.523 "compare": false, 00:10:00.523 "compare_and_write": false, 00:10:00.523 "abort": true, 00:10:00.523 "seek_hole": false, 00:10:00.523 "seek_data": false, 00:10:00.523 "copy": true, 00:10:00.523 "nvme_iov_md": false 00:10:00.523 }, 00:10:00.523 "memory_domains": [ 00:10:00.523 { 00:10:00.523 "dma_device_id": "system", 00:10:00.523 "dma_device_type": 1 00:10:00.523 }, 00:10:00.523 { 00:10:00.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.523 "dma_device_type": 2 00:10:00.523 } 00:10:00.523 ], 00:10:00.523 "driver_specific": {} 00:10:00.523 } 00:10:00.523 ] 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.523 "name": "Existed_Raid", 00:10:00.523 "uuid": "2b396081-c0b6-4f00-9290-c91889c85880", 00:10:00.523 "strip_size_kb": 0, 00:10:00.523 "state": "online", 00:10:00.523 "raid_level": "raid1", 00:10:00.523 "superblock": false, 00:10:00.523 "num_base_bdevs": 2, 00:10:00.523 "num_base_bdevs_discovered": 2, 00:10:00.523 "num_base_bdevs_operational": 2, 00:10:00.523 "base_bdevs_list": [ 00:10:00.523 { 00:10:00.523 "name": "BaseBdev1", 00:10:00.523 "uuid": "c04f0509-7b90-4ae4-b130-04b80fbc2ad6", 00:10:00.523 "is_configured": true, 00:10:00.523 "data_offset": 0, 00:10:00.523 "data_size": 65536 00:10:00.523 }, 00:10:00.523 { 00:10:00.523 "name": "BaseBdev2", 00:10:00.523 "uuid": "9d2f8e00-3b08-44a6-971c-adc097fbcba3", 00:10:00.523 "is_configured": true, 00:10:00.523 "data_offset": 0, 00:10:00.523 "data_size": 65536 00:10:00.523 } 00:10:00.523 ] 00:10:00.523 }' 00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:00.523 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.092 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:01.092 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:01.092 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:01.092 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:01.092 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:01.092 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:01.092 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:01.092 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.092 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.092 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:01.092 [2024-11-20 08:43:31.795156] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.092 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.092 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.092 "name": "Existed_Raid", 00:10:01.092 "aliases": [ 00:10:01.092 "2b396081-c0b6-4f00-9290-c91889c85880" 00:10:01.092 ], 00:10:01.092 "product_name": "Raid Volume", 00:10:01.092 "block_size": 512, 00:10:01.092 "num_blocks": 65536, 00:10:01.092 "uuid": "2b396081-c0b6-4f00-9290-c91889c85880", 00:10:01.092 "assigned_rate_limits": { 00:10:01.092 "rw_ios_per_sec": 0, 00:10:01.092 "rw_mbytes_per_sec": 0, 00:10:01.092 "r_mbytes_per_sec": 
0, 00:10:01.092 "w_mbytes_per_sec": 0 00:10:01.092 }, 00:10:01.092 "claimed": false, 00:10:01.092 "zoned": false, 00:10:01.092 "supported_io_types": { 00:10:01.092 "read": true, 00:10:01.092 "write": true, 00:10:01.092 "unmap": false, 00:10:01.092 "flush": false, 00:10:01.092 "reset": true, 00:10:01.092 "nvme_admin": false, 00:10:01.092 "nvme_io": false, 00:10:01.092 "nvme_io_md": false, 00:10:01.092 "write_zeroes": true, 00:10:01.092 "zcopy": false, 00:10:01.092 "get_zone_info": false, 00:10:01.092 "zone_management": false, 00:10:01.092 "zone_append": false, 00:10:01.092 "compare": false, 00:10:01.092 "compare_and_write": false, 00:10:01.092 "abort": false, 00:10:01.092 "seek_hole": false, 00:10:01.092 "seek_data": false, 00:10:01.092 "copy": false, 00:10:01.092 "nvme_iov_md": false 00:10:01.092 }, 00:10:01.092 "memory_domains": [ 00:10:01.092 { 00:10:01.092 "dma_device_id": "system", 00:10:01.092 "dma_device_type": 1 00:10:01.092 }, 00:10:01.092 { 00:10:01.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.092 "dma_device_type": 2 00:10:01.092 }, 00:10:01.092 { 00:10:01.092 "dma_device_id": "system", 00:10:01.092 "dma_device_type": 1 00:10:01.092 }, 00:10:01.092 { 00:10:01.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.092 "dma_device_type": 2 00:10:01.092 } 00:10:01.092 ], 00:10:01.092 "driver_specific": { 00:10:01.092 "raid": { 00:10:01.092 "uuid": "2b396081-c0b6-4f00-9290-c91889c85880", 00:10:01.092 "strip_size_kb": 0, 00:10:01.092 "state": "online", 00:10:01.092 "raid_level": "raid1", 00:10:01.092 "superblock": false, 00:10:01.092 "num_base_bdevs": 2, 00:10:01.092 "num_base_bdevs_discovered": 2, 00:10:01.092 "num_base_bdevs_operational": 2, 00:10:01.092 "base_bdevs_list": [ 00:10:01.092 { 00:10:01.092 "name": "BaseBdev1", 00:10:01.092 "uuid": "c04f0509-7b90-4ae4-b130-04b80fbc2ad6", 00:10:01.092 "is_configured": true, 00:10:01.092 "data_offset": 0, 00:10:01.092 "data_size": 65536 00:10:01.092 }, 00:10:01.092 { 00:10:01.092 "name": "BaseBdev2", 
00:10:01.092 "uuid": "9d2f8e00-3b08-44a6-971c-adc097fbcba3", 00:10:01.092 "is_configured": true, 00:10:01.092 "data_offset": 0, 00:10:01.092 "data_size": 65536 00:10:01.092 } 00:10:01.092 ] 00:10:01.092 } 00:10:01.092 } 00:10:01.092 }' 00:10:01.092 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.092 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:01.092 BaseBdev2' 00:10:01.092 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.092 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.092 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.092 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:01.092 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.092 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.092 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.093 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.093 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.093 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.093 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.093 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:10:01.093 08:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:01.093 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.093 08:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.351 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.351 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.351 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.351 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:01.351 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.351 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.351 [2024-11-20 08:43:32.054876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:01.351 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.352 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:01.352 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:01.352 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:01.352 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:01.352 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:01.352 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:01.352 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:10:01.352 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.352 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.352 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.352 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:01.352 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.352 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.352 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.352 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.352 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.352 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.352 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.352 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.352 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.352 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.352 "name": "Existed_Raid", 00:10:01.352 "uuid": "2b396081-c0b6-4f00-9290-c91889c85880", 00:10:01.352 "strip_size_kb": 0, 00:10:01.352 "state": "online", 00:10:01.352 "raid_level": "raid1", 00:10:01.352 "superblock": false, 00:10:01.352 "num_base_bdevs": 2, 00:10:01.352 "num_base_bdevs_discovered": 1, 00:10:01.352 "num_base_bdevs_operational": 1, 00:10:01.352 "base_bdevs_list": [ 00:10:01.352 
{ 00:10:01.352 "name": null, 00:10:01.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.352 "is_configured": false, 00:10:01.352 "data_offset": 0, 00:10:01.352 "data_size": 65536 00:10:01.352 }, 00:10:01.352 { 00:10:01.352 "name": "BaseBdev2", 00:10:01.352 "uuid": "9d2f8e00-3b08-44a6-971c-adc097fbcba3", 00:10:01.352 "is_configured": true, 00:10:01.352 "data_offset": 0, 00:10:01.352 "data_size": 65536 00:10:01.352 } 00:10:01.352 ] 00:10:01.352 }' 00:10:01.352 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.352 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.918 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:01.918 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.918 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.918 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:01.918 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.918 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.918 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.918 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:01.918 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:01.918 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:01.918 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.918 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:01.918 [2024-11-20 08:43:32.726943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:01.918 [2024-11-20 08:43:32.727263] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.918 [2024-11-20 08:43:32.817648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.918 [2024-11-20 08:43:32.817720] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:01.918 [2024-11-20 08:43:32.817741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:01.918 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.918 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:01.918 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.918 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.918 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:01.918 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.918 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.918 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.177 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:02.177 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:02.177 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:02.177 08:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62660 00:10:02.177 08:43:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62660 ']' 00:10:02.177 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62660 00:10:02.177 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:02.177 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.177 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62660 00:10:02.177 killing process with pid 62660 00:10:02.177 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.177 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.177 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62660' 00:10:02.177 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62660 00:10:02.177 [2024-11-20 08:43:32.899942] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.177 08:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62660 00:10:02.177 [2024-11-20 08:43:32.914832] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:03.115 ************************************ 00:10:03.115 END TEST raid_state_function_test 00:10:03.115 ************************************ 00:10:03.115 08:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:03.115 00:10:03.115 real 0m5.550s 00:10:03.115 user 0m8.379s 00:10:03.115 sys 0m0.775s 00:10:03.115 08:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.115 08:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.115 08:43:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:10:03.115 08:43:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:03.115 08:43:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.116 08:43:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:03.116 ************************************ 00:10:03.116 START TEST raid_state_function_test_sb 00:10:03.116 ************************************ 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:03.116 Process raid pid: 62919 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62919 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62919' 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62919 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62919 ']' 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.116 08:43:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.116 08:43:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.375 [2024-11-20 08:43:34.108266] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:10:03.375 [2024-11-20 08:43:34.108432] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.375 [2024-11-20 08:43:34.283975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.634 [2024-11-20 08:43:34.415992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.894 [2024-11-20 08:43:34.623413] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.894 [2024-11-20 08:43:34.623471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.462 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.462 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:04.462 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:04.462 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.462 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.462 [2024-11-20 08:43:35.105366] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:04.462 [2024-11-20 08:43:35.105432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:04.462 [2024-11-20 08:43:35.105450] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:04.462 [2024-11-20 08:43:35.105467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:04.462 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.462 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:04.462 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.462 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.462 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.462 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.462 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:04.462 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.462 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.462 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.462 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.462 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.462 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:10:04.462 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.462 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.462 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.462 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.462 "name": "Existed_Raid", 00:10:04.462 "uuid": "26f12714-4b55-4270-b706-ec4763b7da36", 00:10:04.462 "strip_size_kb": 0, 00:10:04.462 "state": "configuring", 00:10:04.462 "raid_level": "raid1", 00:10:04.462 "superblock": true, 00:10:04.462 "num_base_bdevs": 2, 00:10:04.462 "num_base_bdevs_discovered": 0, 00:10:04.462 "num_base_bdevs_operational": 2, 00:10:04.462 "base_bdevs_list": [ 00:10:04.462 { 00:10:04.462 "name": "BaseBdev1", 00:10:04.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.462 "is_configured": false, 00:10:04.462 "data_offset": 0, 00:10:04.462 "data_size": 0 00:10:04.462 }, 00:10:04.462 { 00:10:04.462 "name": "BaseBdev2", 00:10:04.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.462 "is_configured": false, 00:10:04.462 "data_offset": 0, 00:10:04.462 "data_size": 0 00:10:04.462 } 00:10:04.462 ] 00:10:04.462 }' 00:10:04.462 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.463 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.721 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:04.721 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.721 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.721 [2024-11-20 08:43:35.633452] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:10:04.721 [2024-11-20 08:43:35.633664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.981 [2024-11-20 08:43:35.641439] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:04.981 [2024-11-20 08:43:35.641493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:04.981 [2024-11-20 08:43:35.641509] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:04.981 [2024-11-20 08:43:35.641529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.981 [2024-11-20 08:43:35.686981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.981 BaseBdev1 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.981 [ 00:10:04.981 { 00:10:04.981 "name": "BaseBdev1", 00:10:04.981 "aliases": [ 00:10:04.981 "525f8c41-f05e-4e69-be36-7c88969a391f" 00:10:04.981 ], 00:10:04.981 "product_name": "Malloc disk", 00:10:04.981 "block_size": 512, 00:10:04.981 "num_blocks": 65536, 00:10:04.981 "uuid": "525f8c41-f05e-4e69-be36-7c88969a391f", 00:10:04.981 "assigned_rate_limits": { 00:10:04.981 "rw_ios_per_sec": 0, 00:10:04.981 "rw_mbytes_per_sec": 0, 00:10:04.981 "r_mbytes_per_sec": 0, 00:10:04.981 "w_mbytes_per_sec": 0 00:10:04.981 }, 00:10:04.981 "claimed": true, 
00:10:04.981 "claim_type": "exclusive_write", 00:10:04.981 "zoned": false, 00:10:04.981 "supported_io_types": { 00:10:04.981 "read": true, 00:10:04.981 "write": true, 00:10:04.981 "unmap": true, 00:10:04.981 "flush": true, 00:10:04.981 "reset": true, 00:10:04.981 "nvme_admin": false, 00:10:04.981 "nvme_io": false, 00:10:04.981 "nvme_io_md": false, 00:10:04.981 "write_zeroes": true, 00:10:04.981 "zcopy": true, 00:10:04.981 "get_zone_info": false, 00:10:04.981 "zone_management": false, 00:10:04.981 "zone_append": false, 00:10:04.981 "compare": false, 00:10:04.981 "compare_and_write": false, 00:10:04.981 "abort": true, 00:10:04.981 "seek_hole": false, 00:10:04.981 "seek_data": false, 00:10:04.981 "copy": true, 00:10:04.981 "nvme_iov_md": false 00:10:04.981 }, 00:10:04.981 "memory_domains": [ 00:10:04.981 { 00:10:04.981 "dma_device_id": "system", 00:10:04.981 "dma_device_type": 1 00:10:04.981 }, 00:10:04.981 { 00:10:04.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.981 "dma_device_type": 2 00:10:04.981 } 00:10:04.981 ], 00:10:04.981 "driver_specific": {} 00:10:04.981 } 00:10:04.981 ] 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.981 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.982 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.982 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.982 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.982 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.982 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.982 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.982 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.982 "name": "Existed_Raid", 00:10:04.982 "uuid": "7de0a560-fd0f-466f-a8cb-6fa1e729f8ba", 00:10:04.982 "strip_size_kb": 0, 00:10:04.982 "state": "configuring", 00:10:04.982 "raid_level": "raid1", 00:10:04.982 "superblock": true, 00:10:04.982 "num_base_bdevs": 2, 00:10:04.982 "num_base_bdevs_discovered": 1, 00:10:04.982 "num_base_bdevs_operational": 2, 00:10:04.982 "base_bdevs_list": [ 00:10:04.982 { 00:10:04.982 "name": "BaseBdev1", 00:10:04.982 "uuid": "525f8c41-f05e-4e69-be36-7c88969a391f", 00:10:04.982 "is_configured": true, 00:10:04.982 "data_offset": 2048, 00:10:04.982 "data_size": 63488 00:10:04.982 }, 00:10:04.982 { 00:10:04.982 "name": "BaseBdev2", 00:10:04.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.982 "is_configured": false, 00:10:04.982 
"data_offset": 0, 00:10:04.982 "data_size": 0 00:10:04.982 } 00:10:04.982 ] 00:10:04.982 }' 00:10:04.982 08:43:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.982 08:43:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.549 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:05.549 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.550 [2024-11-20 08:43:36.227192] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:05.550 [2024-11-20 08:43:36.227393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.550 [2024-11-20 08:43:36.235220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.550 [2024-11-20 08:43:36.237712] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.550 [2024-11-20 08:43:36.237769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.550 "name": "Existed_Raid", 00:10:05.550 "uuid": "29788095-f3e8-4a49-a83b-cc1017c0e648", 00:10:05.550 "strip_size_kb": 0, 00:10:05.550 "state": "configuring", 00:10:05.550 "raid_level": "raid1", 00:10:05.550 "superblock": true, 00:10:05.550 "num_base_bdevs": 2, 00:10:05.550 "num_base_bdevs_discovered": 1, 00:10:05.550 "num_base_bdevs_operational": 2, 00:10:05.550 "base_bdevs_list": [ 00:10:05.550 { 00:10:05.550 "name": "BaseBdev1", 00:10:05.550 "uuid": "525f8c41-f05e-4e69-be36-7c88969a391f", 00:10:05.550 "is_configured": true, 00:10:05.550 "data_offset": 2048, 00:10:05.550 "data_size": 63488 00:10:05.550 }, 00:10:05.550 { 00:10:05.550 "name": "BaseBdev2", 00:10:05.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.550 "is_configured": false, 00:10:05.550 "data_offset": 0, 00:10:05.550 "data_size": 0 00:10:05.550 } 00:10:05.550 ] 00:10:05.550 }' 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.550 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.121 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:06.121 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.121 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.121 [2024-11-20 08:43:36.790387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.121 [2024-11-20 08:43:36.790889] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:06.121 BaseBdev2 00:10:06.121 [2024-11-20 08:43:36.791037] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:06.121 [2024-11-20 08:43:36.791429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:10:06.121 [2024-11-20 08:43:36.791665] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:06.121 [2024-11-20 08:43:36.791689] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:06.121 [2024-11-20 08:43:36.791869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.121 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.121 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:06.121 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:06.121 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.121 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:06.121 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.121 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.121 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.121 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.121 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.121 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.121 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:06.121 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.121 08:43:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:06.121 [ 00:10:06.121 { 00:10:06.121 "name": "BaseBdev2", 00:10:06.121 "aliases": [ 00:10:06.121 "75963851-f687-4efc-8150-d3a3ae1106de" 00:10:06.121 ], 00:10:06.121 "product_name": "Malloc disk", 00:10:06.121 "block_size": 512, 00:10:06.121 "num_blocks": 65536, 00:10:06.121 "uuid": "75963851-f687-4efc-8150-d3a3ae1106de", 00:10:06.121 "assigned_rate_limits": { 00:10:06.121 "rw_ios_per_sec": 0, 00:10:06.121 "rw_mbytes_per_sec": 0, 00:10:06.121 "r_mbytes_per_sec": 0, 00:10:06.121 "w_mbytes_per_sec": 0 00:10:06.121 }, 00:10:06.121 "claimed": true, 00:10:06.121 "claim_type": "exclusive_write", 00:10:06.121 "zoned": false, 00:10:06.121 "supported_io_types": { 00:10:06.121 "read": true, 00:10:06.121 "write": true, 00:10:06.121 "unmap": true, 00:10:06.121 "flush": true, 00:10:06.121 "reset": true, 00:10:06.121 "nvme_admin": false, 00:10:06.121 "nvme_io": false, 00:10:06.121 "nvme_io_md": false, 00:10:06.121 "write_zeroes": true, 00:10:06.121 "zcopy": true, 00:10:06.121 "get_zone_info": false, 00:10:06.121 "zone_management": false, 00:10:06.121 "zone_append": false, 00:10:06.121 "compare": false, 00:10:06.121 "compare_and_write": false, 00:10:06.121 "abort": true, 00:10:06.121 "seek_hole": false, 00:10:06.121 "seek_data": false, 00:10:06.121 "copy": true, 00:10:06.121 "nvme_iov_md": false 00:10:06.121 }, 00:10:06.121 "memory_domains": [ 00:10:06.121 { 00:10:06.121 "dma_device_id": "system", 00:10:06.121 "dma_device_type": 1 00:10:06.121 }, 00:10:06.121 { 00:10:06.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.121 "dma_device_type": 2 00:10:06.121 } 00:10:06.121 ], 00:10:06.121 "driver_specific": {} 00:10:06.121 } 00:10:06.121 ] 00:10:06.121 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.122 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:06.122 08:43:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:06.122 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.122 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:06.122 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.122 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.122 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.122 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.122 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:06.122 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.122 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.122 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.122 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.122 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.122 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.122 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.122 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.122 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.122 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:06.122 "name": "Existed_Raid", 00:10:06.122 "uuid": "29788095-f3e8-4a49-a83b-cc1017c0e648", 00:10:06.122 "strip_size_kb": 0, 00:10:06.122 "state": "online", 00:10:06.122 "raid_level": "raid1", 00:10:06.122 "superblock": true, 00:10:06.122 "num_base_bdevs": 2, 00:10:06.122 "num_base_bdevs_discovered": 2, 00:10:06.122 "num_base_bdevs_operational": 2, 00:10:06.122 "base_bdevs_list": [ 00:10:06.122 { 00:10:06.122 "name": "BaseBdev1", 00:10:06.122 "uuid": "525f8c41-f05e-4e69-be36-7c88969a391f", 00:10:06.122 "is_configured": true, 00:10:06.122 "data_offset": 2048, 00:10:06.122 "data_size": 63488 00:10:06.122 }, 00:10:06.122 { 00:10:06.122 "name": "BaseBdev2", 00:10:06.122 "uuid": "75963851-f687-4efc-8150-d3a3ae1106de", 00:10:06.122 "is_configured": true, 00:10:06.122 "data_offset": 2048, 00:10:06.122 "data_size": 63488 00:10:06.122 } 00:10:06.122 ] 00:10:06.122 }' 00:10:06.122 08:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.122 08:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.700 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:06.700 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:06.700 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:06.700 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:06.700 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:06.700 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:06.700 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:06.700 08:43:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.700 08:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.700 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:06.700 [2024-11-20 08:43:37.338925] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.700 08:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.700 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:06.700 "name": "Existed_Raid", 00:10:06.700 "aliases": [ 00:10:06.700 "29788095-f3e8-4a49-a83b-cc1017c0e648" 00:10:06.700 ], 00:10:06.700 "product_name": "Raid Volume", 00:10:06.700 "block_size": 512, 00:10:06.700 "num_blocks": 63488, 00:10:06.700 "uuid": "29788095-f3e8-4a49-a83b-cc1017c0e648", 00:10:06.700 "assigned_rate_limits": { 00:10:06.700 "rw_ios_per_sec": 0, 00:10:06.700 "rw_mbytes_per_sec": 0, 00:10:06.700 "r_mbytes_per_sec": 0, 00:10:06.700 "w_mbytes_per_sec": 0 00:10:06.700 }, 00:10:06.700 "claimed": false, 00:10:06.700 "zoned": false, 00:10:06.700 "supported_io_types": { 00:10:06.700 "read": true, 00:10:06.700 "write": true, 00:10:06.700 "unmap": false, 00:10:06.700 "flush": false, 00:10:06.700 "reset": true, 00:10:06.700 "nvme_admin": false, 00:10:06.700 "nvme_io": false, 00:10:06.700 "nvme_io_md": false, 00:10:06.700 "write_zeroes": true, 00:10:06.700 "zcopy": false, 00:10:06.700 "get_zone_info": false, 00:10:06.700 "zone_management": false, 00:10:06.700 "zone_append": false, 00:10:06.700 "compare": false, 00:10:06.700 "compare_and_write": false, 00:10:06.700 "abort": false, 00:10:06.700 "seek_hole": false, 00:10:06.700 "seek_data": false, 00:10:06.700 "copy": false, 00:10:06.700 "nvme_iov_md": false 00:10:06.701 }, 00:10:06.701 "memory_domains": [ 00:10:06.701 { 00:10:06.701 "dma_device_id": "system", 00:10:06.701 
"dma_device_type": 1 00:10:06.701 }, 00:10:06.701 { 00:10:06.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.701 "dma_device_type": 2 00:10:06.701 }, 00:10:06.701 { 00:10:06.701 "dma_device_id": "system", 00:10:06.701 "dma_device_type": 1 00:10:06.701 }, 00:10:06.701 { 00:10:06.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.701 "dma_device_type": 2 00:10:06.701 } 00:10:06.701 ], 00:10:06.701 "driver_specific": { 00:10:06.701 "raid": { 00:10:06.701 "uuid": "29788095-f3e8-4a49-a83b-cc1017c0e648", 00:10:06.701 "strip_size_kb": 0, 00:10:06.701 "state": "online", 00:10:06.701 "raid_level": "raid1", 00:10:06.701 "superblock": true, 00:10:06.701 "num_base_bdevs": 2, 00:10:06.701 "num_base_bdevs_discovered": 2, 00:10:06.701 "num_base_bdevs_operational": 2, 00:10:06.701 "base_bdevs_list": [ 00:10:06.701 { 00:10:06.701 "name": "BaseBdev1", 00:10:06.701 "uuid": "525f8c41-f05e-4e69-be36-7c88969a391f", 00:10:06.701 "is_configured": true, 00:10:06.701 "data_offset": 2048, 00:10:06.701 "data_size": 63488 00:10:06.701 }, 00:10:06.701 { 00:10:06.701 "name": "BaseBdev2", 00:10:06.701 "uuid": "75963851-f687-4efc-8150-d3a3ae1106de", 00:10:06.701 "is_configured": true, 00:10:06.701 "data_offset": 2048, 00:10:06.701 "data_size": 63488 00:10:06.701 } 00:10:06.701 ] 00:10:06.701 } 00:10:06.701 } 00:10:06.701 }' 00:10:06.701 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:06.701 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:06.701 BaseBdev2' 00:10:06.701 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.701 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:06.701 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:10:06.701 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:06.701 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.701 08:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.701 08:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.701 08:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.701 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.701 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.701 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.701 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:06.701 08:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.701 08:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.701 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.701 08:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.701 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.701 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.701 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:06.701 08:43:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.701 08:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.701 [2024-11-20 08:43:37.598727] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:06.960 08:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.960 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:06.960 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:06.960 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:06.960 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:06.960 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:06.960 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:06.960 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.960 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.960 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.960 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.960 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:06.960 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.960 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.960 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:06.960 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.960 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.960 08:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.960 08:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.960 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.960 08:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.960 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.960 "name": "Existed_Raid", 00:10:06.960 "uuid": "29788095-f3e8-4a49-a83b-cc1017c0e648", 00:10:06.960 "strip_size_kb": 0, 00:10:06.960 "state": "online", 00:10:06.960 "raid_level": "raid1", 00:10:06.960 "superblock": true, 00:10:06.960 "num_base_bdevs": 2, 00:10:06.960 "num_base_bdevs_discovered": 1, 00:10:06.960 "num_base_bdevs_operational": 1, 00:10:06.960 "base_bdevs_list": [ 00:10:06.960 { 00:10:06.960 "name": null, 00:10:06.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.960 "is_configured": false, 00:10:06.960 "data_offset": 0, 00:10:06.960 "data_size": 63488 00:10:06.960 }, 00:10:06.960 { 00:10:06.960 "name": "BaseBdev2", 00:10:06.960 "uuid": "75963851-f687-4efc-8150-d3a3ae1106de", 00:10:06.960 "is_configured": true, 00:10:06.960 "data_offset": 2048, 00:10:06.960 "data_size": 63488 00:10:06.960 } 00:10:06.960 ] 00:10:06.960 }' 00:10:06.960 08:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.960 08:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.528 [2024-11-20 08:43:38.256374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:07.528 [2024-11-20 08:43:38.256529] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.528 [2024-11-20 08:43:38.342694] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.528 [2024-11-20 08:43:38.342781] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.528 [2024-11-20 08:43:38.342803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62919 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62919 ']' 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62919 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62919 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:07.528 08:43:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:07.528 killing process with pid 62919 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62919' 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62919 00:10:07.528 [2024-11-20 08:43:38.438504] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:07.528 08:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62919 00:10:07.787 [2024-11-20 08:43:38.453407] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:08.724 08:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:08.724 00:10:08.724 real 0m5.499s 00:10:08.724 user 0m8.281s 00:10:08.724 sys 0m0.785s 00:10:08.724 08:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.724 08:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.724 ************************************ 00:10:08.724 END TEST raid_state_function_test_sb 00:10:08.724 ************************************ 00:10:08.724 08:43:39 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:10:08.724 08:43:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:08.724 08:43:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.724 08:43:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:08.724 ************************************ 00:10:08.724 START TEST raid_superblock_test 00:10:08.724 ************************************ 00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1
00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63171
00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63171
00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63171 ']'
00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:08.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:08.724 08:43:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:08.982 [2024-11-20 08:43:39.660436] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
00:10:08.982 [2024-11-20 08:43:39.660589] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63171 ]
00:10:08.982 [2024-11-20 08:43:39.842527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:09.242 [2024-11-20 08:43:39.997969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:09.501 [2024-11-20 08:43:40.223036] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:09.501 [2024-11-20 08:43:40.223087] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:10.068 malloc1
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:10.068 [2024-11-20 08:43:40.828282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:10:10.068 [2024-11-20 08:43:40.828512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:10.068 [2024-11-20 08:43:40.828556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:10:10.068 [2024-11-20 08:43:40.828572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:10.068 [2024-11-20 08:43:40.831474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:10.068 [2024-11-20 08:43:40.831656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:10:10.068 pt1
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:10.068 malloc2
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:10.068 [2024-11-20 08:43:40.885658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:10.068 [2024-11-20 08:43:40.885727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:10.068 [2024-11-20 08:43:40.885758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:10:10.068 [2024-11-20 08:43:40.885773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:10.068 [2024-11-20 08:43:40.888583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:10.068 [2024-11-20 08:43:40.888627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:10.068 pt2
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:10.068 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:10:10.069 08:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.069 08:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:10.069 [2024-11-20 08:43:40.893716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:10:10.069 [2024-11-20 08:43:40.896455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:10.069 [2024-11-20 08:43:40.896841] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:10:10.069 [2024-11-20 08:43:40.896978] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:10.069 [2024-11-20 08:43:40.897335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:10:10.069 [2024-11-20 08:43:40.897654] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:10:10.069 [2024-11-20 08:43:40.897688] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:10:10.069 [2024-11-20 08:43:40.897916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:10.069 08:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.069 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:10:10.069 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:10.069 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:10.069 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:10.069 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:10.069 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:10.069 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:10.069 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:10.069 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:10.069 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:10.069 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:10.069 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:10.069 08:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.069 08:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:10.069 08:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.069 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:10.069 "name": "raid_bdev1",
00:10:10.069 "uuid": "d75d85ef-fa86-4e9c-9212-766ab3df1f09",
00:10:10.069 "strip_size_kb": 0,
00:10:10.069 "state": "online",
00:10:10.069 "raid_level": "raid1",
00:10:10.069 "superblock": true,
00:10:10.069 "num_base_bdevs": 2,
00:10:10.069 "num_base_bdevs_discovered": 2,
00:10:10.069 "num_base_bdevs_operational": 2,
00:10:10.069 "base_bdevs_list": [
00:10:10.069 {
00:10:10.069 "name": "pt1",
00:10:10.069 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:10.069 "is_configured": true,
00:10:10.069 "data_offset": 2048,
00:10:10.069 "data_size": 63488
00:10:10.069 },
00:10:10.069 {
00:10:10.069 "name": "pt2",
00:10:10.069 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:10.069 "is_configured": true,
00:10:10.069 "data_offset": 2048,
00:10:10.069 "data_size": 63488
00:10:10.069 }
00:10:10.069 ]
00:10:10.069 }'
00:10:10.069 08:43:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:10.069 08:43:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:10.710 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:10:10.710 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:10:10.710 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:10.710 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:10.710 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:10.710 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:10.710 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:10.710 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.710 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:10.710 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:10.710 [2024-11-20 08:43:41.426461] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:10.710 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.710 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:10.710 "name": "raid_bdev1",
00:10:10.711 "aliases": [
00:10:10.711 "d75d85ef-fa86-4e9c-9212-766ab3df1f09"
00:10:10.711 ],
00:10:10.711 "product_name": "Raid Volume",
00:10:10.711 "block_size": 512,
00:10:10.711 "num_blocks": 63488,
00:10:10.711 "uuid": "d75d85ef-fa86-4e9c-9212-766ab3df1f09",
00:10:10.711 "assigned_rate_limits": {
00:10:10.711 "rw_ios_per_sec": 0,
00:10:10.711 "rw_mbytes_per_sec": 0,
00:10:10.711 "r_mbytes_per_sec": 0,
00:10:10.711 "w_mbytes_per_sec": 0
00:10:10.711 },
00:10:10.711 "claimed": false,
00:10:10.711 "zoned": false,
00:10:10.711 "supported_io_types": {
00:10:10.711 "read": true,
00:10:10.711 "write": true,
00:10:10.711 "unmap": false,
00:10:10.711 "flush": false,
00:10:10.711 "reset": true,
00:10:10.711 "nvme_admin": false,
00:10:10.711 "nvme_io": false,
00:10:10.711 "nvme_io_md": false,
00:10:10.711 "write_zeroes": true,
00:10:10.711 "zcopy": false,
00:10:10.711 "get_zone_info": false,
00:10:10.711 "zone_management": false,
00:10:10.711 "zone_append": false,
00:10:10.711 "compare": false,
00:10:10.711 "compare_and_write": false,
00:10:10.711 "abort": false,
00:10:10.711 "seek_hole": false,
00:10:10.711 "seek_data": false,
00:10:10.711 "copy": false,
00:10:10.711 "nvme_iov_md": false
00:10:10.711 },
00:10:10.711 "memory_domains": [
00:10:10.711 {
00:10:10.711 "dma_device_id": "system",
00:10:10.711 "dma_device_type": 1
00:10:10.711 },
00:10:10.711 {
00:10:10.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:10.711 "dma_device_type": 2
00:10:10.711 },
00:10:10.711 {
00:10:10.711 "dma_device_id": "system",
00:10:10.711 "dma_device_type": 1
00:10:10.711 },
00:10:10.711 {
00:10:10.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:10.711 "dma_device_type": 2
00:10:10.711 }
00:10:10.711 ],
00:10:10.711 "driver_specific": {
00:10:10.711 "raid": {
00:10:10.711 "uuid": "d75d85ef-fa86-4e9c-9212-766ab3df1f09",
00:10:10.711 "strip_size_kb": 0,
00:10:10.711 "state": "online",
00:10:10.711 "raid_level": "raid1",
00:10:10.711 "superblock": true,
00:10:10.711 "num_base_bdevs": 2,
00:10:10.711 "num_base_bdevs_discovered": 2,
00:10:10.711 "num_base_bdevs_operational": 2,
00:10:10.711 "base_bdevs_list": [
00:10:10.711 {
00:10:10.711 "name": "pt1",
00:10:10.711 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:10.711 "is_configured": true,
00:10:10.711 "data_offset": 2048,
00:10:10.711 "data_size": 63488
00:10:10.711 },
00:10:10.711 {
00:10:10.711 "name": "pt2",
00:10:10.711 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:10.711 "is_configured": true,
00:10:10.711 "data_offset": 2048,
00:10:10.711 "data_size": 63488
00:10:10.711 }
00:10:10.711 ]
00:10:10.711 }
00:10:10.711 }
00:10:10.711 }'
00:10:10.711 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:10.711 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:10:10.711 pt2'
00:10:10.711 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:10.711 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:10.711 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:10.711 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:10:10.711 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.711 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:10.711 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:10.970 [2024-11-20 08:43:41.702460] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d75d85ef-fa86-4e9c-9212-766ab3df1f09
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d75d85ef-fa86-4e9c-9212-766ab3df1f09 ']'
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:10.970 [2024-11-20 08:43:41.762106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:10.970 [2024-11-20 08:43:41.762261] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:10.970 [2024-11-20 08:43:41.762378] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:10.970 [2024-11-20 08:43:41.762478] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:10.970 [2024-11-20 08:43:41.762502] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:10.970 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:11.230 [2024-11-20 08:43:41.902221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:10:11.230 [2024-11-20 08:43:41.904824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:10:11.230 [2024-11-20 08:43:41.905035] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:10:11.230 [2024-11-20 08:43:41.905124] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:10:11.230 [2024-11-20 08:43:41.905177] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:11.230 [2024-11-20 08:43:41.905205] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:10:11.230 request:
00:10:11.230 {
00:10:11.230 "name": "raid_bdev1",
00:10:11.230 "raid_level": "raid1",
00:10:11.230 "base_bdevs": [
00:10:11.230 "malloc1",
00:10:11.230 "malloc2"
00:10:11.230 ],
00:10:11.230 "superblock": false,
00:10:11.230 "method": "bdev_raid_create",
00:10:11.230 "req_id": 1
00:10:11.230 }
00:10:11.230 Got JSON-RPC error response
00:10:11.230 response:
00:10:11.230 {
00:10:11.230 "code": -17,
00:10:11.230 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:10:11.230 }
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:11.230 [2024-11-20 08:43:41.970189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:10:11.230 [2024-11-20 08:43:41.970392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:11.230 [2024-11-20 08:43:41.970541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:10:11.230 [2024-11-20 08:43:41.970682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:11.230 [2024-11-20 08:43:41.973699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:11.230 [2024-11-20 08:43:41.973859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:10:11.230 [2024-11-20 08:43:41.974058] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:10:11.230 [2024-11-20 08:43:41.974289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:10:11.230 pt1
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:11.230 08:43:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:11.230 08:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:11.230 "name": "raid_bdev1",
00:10:11.230 "uuid": "d75d85ef-fa86-4e9c-9212-766ab3df1f09",
00:10:11.230 "strip_size_kb": 0,
00:10:11.230 "state": "configuring",
00:10:11.230 "raid_level": "raid1",
00:10:11.230 "superblock": true,
00:10:11.230 "num_base_bdevs": 2,
00:10:11.230 "num_base_bdevs_discovered": 1,
00:10:11.230 "num_base_bdevs_operational": 2,
00:10:11.230 "base_bdevs_list": [
00:10:11.230 {
00:10:11.230 "name": "pt1",
00:10:11.230 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:11.230 "is_configured": true,
00:10:11.230 "data_offset": 2048,
00:10:11.230 "data_size": 63488
00:10:11.230 },
00:10:11.230 {
00:10:11.230 "name": null,
00:10:11.230 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:11.230 "is_configured": false,
00:10:11.230 "data_offset": 2048,
00:10:11.230 "data_size": 63488
00:10:11.230 }
00:10:11.230 ]
00:10:11.230 }'
00:10:11.230 08:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:11.230 08:43:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:11.799 [2024-11-20 08:43:42.522805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:11.799 [2024-11-20 08:43:42.523056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:11.799 [2024-11-20 08:43:42.523096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:10:11.799 [2024-11-20 08:43:42.523115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:11.799 [2024-11-20 08:43:42.523714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:11.799 [2024-11-20 08:43:42.523752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:11.799 [2024-11-20 08:43:42.523858] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:10:11.799 [2024-11-20 08:43:42.523899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:11.799 [2024-11-20 08:43:42.524043] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:10:11.799 [2024-11-20 08:43:42.524070] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:11.799 [2024-11-20 08:43:42.524383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:10:11.799 [2024-11-20 08:43:42.524576] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:10:11.799 [2024-11-20 08:43:42.524599] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:10:11.799 [2024-11-20 08:43:42.524769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:11.799 pt2
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:11.799 "name": "raid_bdev1",
00:10:11.799 "uuid": "d75d85ef-fa86-4e9c-9212-766ab3df1f09",
00:10:11.799 "strip_size_kb": 0,
00:10:11.799 "state": "online",
00:10:11.799 "raid_level": "raid1",
00:10:11.799 "superblock": true,
00:10:11.799 "num_base_bdevs": 2,
00:10:11.799 "num_base_bdevs_discovered": 2,
00:10:11.799 "num_base_bdevs_operational": 2,
00:10:11.799 "base_bdevs_list": [
00:10:11.799 {
00:10:11.799 "name": "pt1",
00:10:11.799 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:11.799 "is_configured": true,
00:10:11.799 "data_offset": 2048,
00:10:11.799 "data_size": 63488
00:10:11.799 },
00:10:11.799 {
00:10:11.799 "name": "pt2",
00:10:11.799 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:11.799 "is_configured": true,
00:10:11.799 "data_offset": 2048,
00:10:11.799 "data_size": 63488
00:10:11.799 }
00:10:11.799 ]
00:10:11.799 }'
00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:10:11.799 08:43:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.366 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:12.366 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:12.366 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:12.366 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:12.366 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:12.366 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:12.366 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:12.366 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:12.367 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.367 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.367 [2024-11-20 08:43:43.051347] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:12.367 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.367 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:12.367 "name": "raid_bdev1", 00:10:12.367 "aliases": [ 00:10:12.367 "d75d85ef-fa86-4e9c-9212-766ab3df1f09" 00:10:12.367 ], 00:10:12.367 "product_name": "Raid Volume", 00:10:12.367 "block_size": 512, 00:10:12.367 "num_blocks": 63488, 00:10:12.367 "uuid": "d75d85ef-fa86-4e9c-9212-766ab3df1f09", 00:10:12.367 "assigned_rate_limits": { 00:10:12.367 "rw_ios_per_sec": 0, 00:10:12.367 "rw_mbytes_per_sec": 0, 00:10:12.367 "r_mbytes_per_sec": 0, 00:10:12.367 "w_mbytes_per_sec": 0 
00:10:12.367 }, 00:10:12.367 "claimed": false, 00:10:12.367 "zoned": false, 00:10:12.367 "supported_io_types": { 00:10:12.367 "read": true, 00:10:12.367 "write": true, 00:10:12.367 "unmap": false, 00:10:12.367 "flush": false, 00:10:12.367 "reset": true, 00:10:12.367 "nvme_admin": false, 00:10:12.367 "nvme_io": false, 00:10:12.367 "nvme_io_md": false, 00:10:12.367 "write_zeroes": true, 00:10:12.367 "zcopy": false, 00:10:12.367 "get_zone_info": false, 00:10:12.367 "zone_management": false, 00:10:12.367 "zone_append": false, 00:10:12.367 "compare": false, 00:10:12.367 "compare_and_write": false, 00:10:12.367 "abort": false, 00:10:12.367 "seek_hole": false, 00:10:12.367 "seek_data": false, 00:10:12.367 "copy": false, 00:10:12.367 "nvme_iov_md": false 00:10:12.367 }, 00:10:12.367 "memory_domains": [ 00:10:12.367 { 00:10:12.367 "dma_device_id": "system", 00:10:12.367 "dma_device_type": 1 00:10:12.367 }, 00:10:12.367 { 00:10:12.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.367 "dma_device_type": 2 00:10:12.367 }, 00:10:12.367 { 00:10:12.367 "dma_device_id": "system", 00:10:12.367 "dma_device_type": 1 00:10:12.367 }, 00:10:12.367 { 00:10:12.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.367 "dma_device_type": 2 00:10:12.367 } 00:10:12.367 ], 00:10:12.367 "driver_specific": { 00:10:12.367 "raid": { 00:10:12.367 "uuid": "d75d85ef-fa86-4e9c-9212-766ab3df1f09", 00:10:12.367 "strip_size_kb": 0, 00:10:12.367 "state": "online", 00:10:12.367 "raid_level": "raid1", 00:10:12.367 "superblock": true, 00:10:12.367 "num_base_bdevs": 2, 00:10:12.367 "num_base_bdevs_discovered": 2, 00:10:12.367 "num_base_bdevs_operational": 2, 00:10:12.367 "base_bdevs_list": [ 00:10:12.367 { 00:10:12.367 "name": "pt1", 00:10:12.367 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.367 "is_configured": true, 00:10:12.367 "data_offset": 2048, 00:10:12.367 "data_size": 63488 00:10:12.367 }, 00:10:12.367 { 00:10:12.367 "name": "pt2", 00:10:12.367 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:10:12.367 "is_configured": true, 00:10:12.367 "data_offset": 2048, 00:10:12.367 "data_size": 63488 00:10:12.367 } 00:10:12.367 ] 00:10:12.367 } 00:10:12.367 } 00:10:12.367 }' 00:10:12.367 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:12.367 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:12.367 pt2' 00:10:12.367 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.367 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:12.367 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.367 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:12.367 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.367 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.367 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.367 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.367 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.367 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.367 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.367 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:12.367 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:12.367 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.367 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.367 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.626 [2024-11-20 08:43:43.315394] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d75d85ef-fa86-4e9c-9212-766ab3df1f09 '!=' d75d85ef-fa86-4e9c-9212-766ab3df1f09 ']' 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:12.626 [2024-11-20 08:43:43.363179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.626 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:10:12.627 "name": "raid_bdev1", 00:10:12.627 "uuid": "d75d85ef-fa86-4e9c-9212-766ab3df1f09", 00:10:12.627 "strip_size_kb": 0, 00:10:12.627 "state": "online", 00:10:12.627 "raid_level": "raid1", 00:10:12.627 "superblock": true, 00:10:12.627 "num_base_bdevs": 2, 00:10:12.627 "num_base_bdevs_discovered": 1, 00:10:12.627 "num_base_bdevs_operational": 1, 00:10:12.627 "base_bdevs_list": [ 00:10:12.627 { 00:10:12.627 "name": null, 00:10:12.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.627 "is_configured": false, 00:10:12.627 "data_offset": 0, 00:10:12.627 "data_size": 63488 00:10:12.627 }, 00:10:12.627 { 00:10:12.627 "name": "pt2", 00:10:12.627 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.627 "is_configured": true, 00:10:12.627 "data_offset": 2048, 00:10:12.627 "data_size": 63488 00:10:12.627 } 00:10:12.627 ] 00:10:12.627 }' 00:10:12.627 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.627 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.194 [2024-11-20 08:43:43.887252] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:13.194 [2024-11-20 08:43:43.887287] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.194 [2024-11-20 08:43:43.887381] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.194 [2024-11-20 08:43:43.887443] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.194 [2024-11-20 08:43:43.887462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.194 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.194 [2024-11-20 08:43:43.959219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:13.194 [2024-11-20 08:43:43.959294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.194 [2024-11-20 08:43:43.959321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:13.194 [2024-11-20 08:43:43.959339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.194 [2024-11-20 08:43:43.962225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.195 [2024-11-20 08:43:43.962275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:13.195 [2024-11-20 08:43:43.962373] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:13.195 [2024-11-20 08:43:43.962442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:13.195 [2024-11-20 08:43:43.962570] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:13.195 [2024-11-20 08:43:43.962593] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:13.195 [2024-11-20 08:43:43.962873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:13.195 [2024-11-20 08:43:43.963077] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:13.195 [2024-11-20 08:43:43.963094] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:10:13.195 [2024-11-20 08:43:43.963339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.195 pt2 00:10:13.195 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.195 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:13.195 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.195 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.195 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.195 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.195 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:13.195 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.195 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.195 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.195 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.195 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.195 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.195 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.195 08:43:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.195 08:43:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.195 08:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:10:13.195 "name": "raid_bdev1", 00:10:13.195 "uuid": "d75d85ef-fa86-4e9c-9212-766ab3df1f09", 00:10:13.195 "strip_size_kb": 0, 00:10:13.195 "state": "online", 00:10:13.195 "raid_level": "raid1", 00:10:13.195 "superblock": true, 00:10:13.195 "num_base_bdevs": 2, 00:10:13.195 "num_base_bdevs_discovered": 1, 00:10:13.195 "num_base_bdevs_operational": 1, 00:10:13.195 "base_bdevs_list": [ 00:10:13.195 { 00:10:13.195 "name": null, 00:10:13.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.195 "is_configured": false, 00:10:13.195 "data_offset": 2048, 00:10:13.195 "data_size": 63488 00:10:13.195 }, 00:10:13.195 { 00:10:13.195 "name": "pt2", 00:10:13.195 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.195 "is_configured": true, 00:10:13.195 "data_offset": 2048, 00:10:13.195 "data_size": 63488 00:10:13.195 } 00:10:13.195 ] 00:10:13.195 }' 00:10:13.195 08:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.195 08:43:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.762 [2024-11-20 08:43:44.479373] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:13.762 [2024-11-20 08:43:44.479409] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.762 [2024-11-20 08:43:44.479495] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.762 [2024-11-20 08:43:44.479577] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.762 [2024-11-20 08:43:44.479594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.762 [2024-11-20 08:43:44.543394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:13.762 [2024-11-20 08:43:44.543460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.762 [2024-11-20 08:43:44.543491] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:10:13.762 [2024-11-20 08:43:44.543505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.762 [2024-11-20 08:43:44.546599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.762 [2024-11-20 08:43:44.546653] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:13.762 [2024-11-20 08:43:44.546753] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:13.762 [2024-11-20 08:43:44.546809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:13.762 [2024-11-20 08:43:44.546988] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:13.762 [2024-11-20 08:43:44.547007] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:13.762 [2024-11-20 08:43:44.547028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:10:13.762 [2024-11-20 08:43:44.547110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:13.762 [2024-11-20 08:43:44.547239] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:13.762 [2024-11-20 08:43:44.547255] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:13.762 [2024-11-20 08:43:44.547576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:13.762 [2024-11-20 08:43:44.547764] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:13.762 [2024-11-20 08:43:44.547785] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:13.762 [2024-11-20 08:43:44.548012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.762 pt1 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.762 08:43:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.763 08:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.763 "name": "raid_bdev1", 00:10:13.763 "uuid": "d75d85ef-fa86-4e9c-9212-766ab3df1f09", 00:10:13.763 "strip_size_kb": 0, 00:10:13.763 "state": "online", 00:10:13.763 "raid_level": "raid1", 00:10:13.763 "superblock": true, 00:10:13.763 "num_base_bdevs": 2, 00:10:13.763 "num_base_bdevs_discovered": 1, 00:10:13.763 "num_base_bdevs_operational": 
1, 00:10:13.763 "base_bdevs_list": [ 00:10:13.763 { 00:10:13.763 "name": null, 00:10:13.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.763 "is_configured": false, 00:10:13.763 "data_offset": 2048, 00:10:13.763 "data_size": 63488 00:10:13.763 }, 00:10:13.763 { 00:10:13.763 "name": "pt2", 00:10:13.763 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.763 "is_configured": true, 00:10:13.763 "data_offset": 2048, 00:10:13.763 "data_size": 63488 00:10:13.763 } 00:10:13.763 ] 00:10:13.763 }' 00:10:13.763 08:43:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.763 08:43:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.330 08:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:14.330 08:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:14.330 08:43:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.330 08:43:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.330 08:43:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.330 08:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:14.330 08:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:14.330 08:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:14.330 08:43:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.330 08:43:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.330 [2024-11-20 08:43:45.116369] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.330 08:43:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.330 08:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d75d85ef-fa86-4e9c-9212-766ab3df1f09 '!=' d75d85ef-fa86-4e9c-9212-766ab3df1f09 ']' 00:10:14.330 08:43:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63171 00:10:14.330 08:43:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63171 ']' 00:10:14.330 08:43:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63171 00:10:14.330 08:43:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:14.330 08:43:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.330 08:43:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63171 00:10:14.330 killing process with pid 63171 00:10:14.330 08:43:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:14.330 08:43:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:14.330 08:43:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63171' 00:10:14.330 08:43:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63171 00:10:14.330 [2024-11-20 08:43:45.194440] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:14.330 08:43:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63171 00:10:14.330 [2024-11-20 08:43:45.194543] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.330 [2024-11-20 08:43:45.194604] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.331 [2024-11-20 08:43:45.194625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state 
offline 00:10:14.590 [2024-11-20 08:43:45.380165] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:15.591 ************************************ 00:10:15.591 END TEST raid_superblock_test 00:10:15.591 ************************************ 00:10:15.591 08:43:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:15.591 00:10:15.591 real 0m6.858s 00:10:15.591 user 0m10.907s 00:10:15.591 sys 0m0.974s 00:10:15.591 08:43:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.591 08:43:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.591 08:43:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:10:15.591 08:43:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:15.591 08:43:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.591 08:43:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:15.591 ************************************ 00:10:15.591 START TEST raid_read_error_test 00:10:15.591 ************************************ 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ia6ch02Ph9 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63512 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63512 00:10:15.591 08:43:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63512 ']' 00:10:15.592 08:43:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:15.592 08:43:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.592 08:43:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.592 08:43:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.592 08:43:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.592 08:43:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.850 [2024-11-20 08:43:46.602261] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:10:15.850 [2024-11-20 08:43:46.602457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63512 ] 00:10:16.108 [2024-11-20 08:43:46.794668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.109 [2024-11-20 08:43:46.953032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.367 [2024-11-20 08:43:47.215192] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.367 [2024-11-20 08:43:47.215276] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 
-- # for bdev in "${base_bdevs[@]}" 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.934 BaseBdev1_malloc 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.934 true 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.934 [2024-11-20 08:43:47.680566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:16.934 [2024-11-20 08:43:47.680648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.934 [2024-11-20 08:43:47.680687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:16.934 [2024-11-20 08:43:47.680705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.934 [2024-11-20 08:43:47.683588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.934 [2024-11-20 08:43:47.683641] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:16.934 BaseBdev1 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.934 BaseBdev2_malloc 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.934 true 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.934 [2024-11-20 08:43:47.741360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:16.934 [2024-11-20 08:43:47.741437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.934 [2024-11-20 08:43:47.741463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:16.934 [2024-11-20 
08:43:47.741481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.934 [2024-11-20 08:43:47.744320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.934 [2024-11-20 08:43:47.744374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:16.934 BaseBdev2 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.934 [2024-11-20 08:43:47.749431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.934 [2024-11-20 08:43:47.751893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.934 [2024-11-20 08:43:47.752188] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:16.934 [2024-11-20 08:43:47.752222] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:16.934 [2024-11-20 08:43:47.752539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:16.934 [2024-11-20 08:43:47.752789] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:16.934 [2024-11-20 08:43:47.752817] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:16.934 [2024-11-20 08:43:47.753022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.934 08:43:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.934 08:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:16.935 08:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.935 08:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.935 08:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.935 08:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.935 08:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.935 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.935 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.935 08:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.935 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.935 08:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.935 "name": "raid_bdev1", 00:10:16.935 "uuid": "9a158416-e406-4d76-8491-578c1d38cb24", 00:10:16.935 "strip_size_kb": 0, 00:10:16.935 "state": "online", 00:10:16.935 "raid_level": "raid1", 00:10:16.935 "superblock": true, 00:10:16.935 "num_base_bdevs": 2, 
00:10:16.935 "num_base_bdevs_discovered": 2, 00:10:16.935 "num_base_bdevs_operational": 2, 00:10:16.935 "base_bdevs_list": [ 00:10:16.935 { 00:10:16.935 "name": "BaseBdev1", 00:10:16.935 "uuid": "fbe7255d-bade-5642-ad83-019a356e1df3", 00:10:16.935 "is_configured": true, 00:10:16.935 "data_offset": 2048, 00:10:16.935 "data_size": 63488 00:10:16.935 }, 00:10:16.935 { 00:10:16.935 "name": "BaseBdev2", 00:10:16.935 "uuid": "c4862e47-12d2-5a38-a5c3-75869707f623", 00:10:16.935 "is_configured": true, 00:10:16.935 "data_offset": 2048, 00:10:16.935 "data_size": 63488 00:10:16.935 } 00:10:16.935 ] 00:10:16.935 }' 00:10:16.935 08:43:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.935 08:43:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.501 08:43:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:17.501 08:43:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:17.501 [2024-11-20 08:43:48.343023] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:18.434 08:43:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.434 "name": "raid_bdev1", 00:10:18.434 "uuid": "9a158416-e406-4d76-8491-578c1d38cb24", 00:10:18.434 "strip_size_kb": 0, 00:10:18.434 "state": "online", 
00:10:18.434 "raid_level": "raid1", 00:10:18.434 "superblock": true, 00:10:18.434 "num_base_bdevs": 2, 00:10:18.434 "num_base_bdevs_discovered": 2, 00:10:18.434 "num_base_bdevs_operational": 2, 00:10:18.434 "base_bdevs_list": [ 00:10:18.434 { 00:10:18.434 "name": "BaseBdev1", 00:10:18.434 "uuid": "fbe7255d-bade-5642-ad83-019a356e1df3", 00:10:18.434 "is_configured": true, 00:10:18.434 "data_offset": 2048, 00:10:18.434 "data_size": 63488 00:10:18.434 }, 00:10:18.434 { 00:10:18.434 "name": "BaseBdev2", 00:10:18.434 "uuid": "c4862e47-12d2-5a38-a5c3-75869707f623", 00:10:18.434 "is_configured": true, 00:10:18.434 "data_offset": 2048, 00:10:18.434 "data_size": 63488 00:10:18.434 } 00:10:18.434 ] 00:10:18.434 }' 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.434 08:43:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.043 08:43:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:19.043 08:43:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.043 08:43:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.043 [2024-11-20 08:43:49.794613] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:19.043 [2024-11-20 08:43:49.794659] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.043 [2024-11-20 08:43:49.797969] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.043 [2024-11-20 08:43:49.798037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.043 [2024-11-20 08:43:49.798161] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.043 [2024-11-20 08:43:49.798184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:10:19.043 { 00:10:19.043 "results": [ 00:10:19.043 { 00:10:19.043 "job": "raid_bdev1", 00:10:19.043 "core_mask": "0x1", 00:10:19.043 "workload": "randrw", 00:10:19.043 "percentage": 50, 00:10:19.043 "status": "finished", 00:10:19.043 "queue_depth": 1, 00:10:19.043 "io_size": 131072, 00:10:19.043 "runtime": 1.448989, 00:10:19.043 "iops": 11958.682916157404, 00:10:19.043 "mibps": 1494.8353645196755, 00:10:19.043 "io_failed": 0, 00:10:19.043 "io_timeout": 0, 00:10:19.043 "avg_latency_us": 79.29126836229328, 00:10:19.043 "min_latency_us": 40.261818181818185, 00:10:19.043 "max_latency_us": 2129.92 00:10:19.043 } 00:10:19.043 ], 00:10:19.043 "core_count": 1 00:10:19.043 } 00:10:19.043 08:43:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.043 08:43:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63512 00:10:19.043 08:43:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63512 ']' 00:10:19.043 08:43:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63512 00:10:19.043 08:43:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:19.043 08:43:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.043 08:43:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63512 00:10:19.043 08:43:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.043 08:43:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.043 killing process with pid 63512 00:10:19.043 08:43:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63512' 00:10:19.043 08:43:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63512 00:10:19.043 [2024-11-20 
08:43:49.835664] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:19.043 08:43:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63512 00:10:19.304 [2024-11-20 08:43:49.956174] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:20.241 08:43:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ia6ch02Ph9 00:10:20.241 08:43:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:20.241 08:43:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:20.241 08:43:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:20.241 08:43:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:20.241 08:43:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:20.241 08:43:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:20.241 08:43:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:20.241 00:10:20.241 real 0m4.602s 00:10:20.241 user 0m5.763s 00:10:20.241 sys 0m0.566s 00:10:20.241 08:43:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.241 08:43:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.241 ************************************ 00:10:20.241 END TEST raid_read_error_test 00:10:20.241 ************************************ 00:10:20.241 08:43:51 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:10:20.241 08:43:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:20.241 08:43:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.241 08:43:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:20.241 ************************************ 00:10:20.241 START TEST 
raid_write_error_test 00:10:20.241 ************************************ 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:20.241 08:43:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7kMScONGEI 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63652 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63652 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63652 ']' 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.241 08:43:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.500 [2024-11-20 08:43:51.257707] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:10:20.500 [2024-11-20 08:43:51.257888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63652 ] 00:10:20.758 [2024-11-20 08:43:51.442737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.758 [2024-11-20 08:43:51.574647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.016 [2024-11-20 08:43:51.783055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.016 [2024-11-20 08:43:51.783112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.583 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.583 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:21.583 08:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:21.583 08:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:21.583 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.584 BaseBdev1_malloc 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.584 true 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.584 [2024-11-20 08:43:52.343631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:21.584 [2024-11-20 08:43:52.343720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.584 [2024-11-20 08:43:52.343751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:21.584 [2024-11-20 08:43:52.343769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.584 [2024-11-20 08:43:52.346714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.584 [2024-11-20 08:43:52.346803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:21.584 BaseBdev1 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.584 BaseBdev2_malloc 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:21.584 08:43:52 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.584 true 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.584 [2024-11-20 08:43:52.407270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:21.584 [2024-11-20 08:43:52.407358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.584 [2024-11-20 08:43:52.407385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:21.584 [2024-11-20 08:43:52.407403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.584 [2024-11-20 08:43:52.410253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.584 [2024-11-20 08:43:52.410334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:21.584 BaseBdev2 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.584 [2024-11-20 08:43:52.415323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:21.584 [2024-11-20 08:43:52.417865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:21.584 [2024-11-20 08:43:52.418142] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:21.584 [2024-11-20 08:43:52.418219] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:21.584 [2024-11-20 08:43:52.418549] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:21.584 [2024-11-20 08:43:52.418830] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:21.584 [2024-11-20 08:43:52.418858] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:21.584 [2024-11-20 08:43:52.419050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.584 "name": "raid_bdev1", 00:10:21.584 "uuid": "9f1b183b-364a-42e7-a4b7-fd10efc335fb", 00:10:21.584 "strip_size_kb": 0, 00:10:21.584 "state": "online", 00:10:21.584 "raid_level": "raid1", 00:10:21.584 "superblock": true, 00:10:21.584 "num_base_bdevs": 2, 00:10:21.584 "num_base_bdevs_discovered": 2, 00:10:21.584 "num_base_bdevs_operational": 2, 00:10:21.584 "base_bdevs_list": [ 00:10:21.584 { 00:10:21.584 "name": "BaseBdev1", 00:10:21.584 "uuid": "73cc0dd3-281c-5e3c-bbf9-0ba8078b0649", 00:10:21.584 "is_configured": true, 00:10:21.584 "data_offset": 2048, 00:10:21.584 "data_size": 63488 00:10:21.584 }, 00:10:21.584 { 00:10:21.584 "name": "BaseBdev2", 00:10:21.584 "uuid": "675265f4-6663-5105-8c3a-5ba2fcd20ad7", 00:10:21.584 "is_configured": true, 00:10:21.584 "data_offset": 2048, 00:10:21.584 "data_size": 63488 00:10:21.584 } 00:10:21.584 ] 00:10:21.584 }' 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.584 08:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.152 08:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:22.152 08:43:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:22.410 [2024-11-20 08:43:53.097025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:23.345 08:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:23.345 08:43:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.345 08:43:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.345 [2024-11-20 08:43:53.970982] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:23.345 [2024-11-20 08:43:53.971060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:23.345 [2024-11-20 08:43:53.971298] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:10:23.345 08:43:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.345 08:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:23.345 08:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:23.345 08:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:23.345 08:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:10:23.345 08:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:23.345 08:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.345 08:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.345 08:43:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.345 08:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.345 08:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:23.345 08:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.345 08:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.345 08:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.345 08:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.345 08:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.345 08:43:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.345 08:43:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.345 08:43:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.345 08:43:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.345 08:43:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.345 "name": "raid_bdev1", 00:10:23.345 "uuid": "9f1b183b-364a-42e7-a4b7-fd10efc335fb", 00:10:23.345 "strip_size_kb": 0, 00:10:23.345 "state": "online", 00:10:23.345 "raid_level": "raid1", 00:10:23.345 "superblock": true, 00:10:23.345 "num_base_bdevs": 2, 00:10:23.345 "num_base_bdevs_discovered": 1, 00:10:23.345 "num_base_bdevs_operational": 1, 00:10:23.345 "base_bdevs_list": [ 00:10:23.345 { 00:10:23.346 "name": null, 00:10:23.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.346 "is_configured": false, 00:10:23.346 "data_offset": 0, 00:10:23.346 "data_size": 63488 00:10:23.346 }, 00:10:23.346 { 00:10:23.346 "name": 
"BaseBdev2", 00:10:23.346 "uuid": "675265f4-6663-5105-8c3a-5ba2fcd20ad7", 00:10:23.346 "is_configured": true, 00:10:23.346 "data_offset": 2048, 00:10:23.346 "data_size": 63488 00:10:23.346 } 00:10:23.346 ] 00:10:23.346 }' 00:10:23.346 08:43:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.346 08:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.604 08:43:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:23.604 08:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.604 08:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.867 [2024-11-20 08:43:54.522683] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:23.867 [2024-11-20 08:43:54.522729] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:23.868 [2024-11-20 08:43:54.526061] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.868 [2024-11-20 08:43:54.526122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.868 [2024-11-20 08:43:54.526227] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.868 [2024-11-20 08:43:54.526245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:23.868 { 00:10:23.868 "results": [ 00:10:23.868 { 00:10:23.868 "job": "raid_bdev1", 00:10:23.868 "core_mask": "0x1", 00:10:23.868 "workload": "randrw", 00:10:23.868 "percentage": 50, 00:10:23.868 "status": "finished", 00:10:23.868 "queue_depth": 1, 00:10:23.868 "io_size": 131072, 00:10:23.868 "runtime": 1.422996, 00:10:23.868 "iops": 13500.389319435895, 00:10:23.868 "mibps": 1687.548664929487, 00:10:23.868 "io_failed": 0, 00:10:23.868 "io_timeout": 0, 
00:10:23.868 "avg_latency_us": 69.74835515637348, 00:10:23.868 "min_latency_us": 40.49454545454545, 00:10:23.868 "max_latency_us": 2144.8145454545456 00:10:23.868 } 00:10:23.868 ], 00:10:23.868 "core_count": 1 00:10:23.868 } 00:10:23.868 08:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.868 08:43:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63652 00:10:23.868 08:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63652 ']' 00:10:23.868 08:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63652 00:10:23.868 08:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:23.868 08:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:23.868 08:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63652 00:10:23.868 08:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:23.868 08:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:23.868 08:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63652' 00:10:23.868 killing process with pid 63652 00:10:23.868 08:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63652 00:10:23.868 [2024-11-20 08:43:54.564495] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:23.868 08:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63652 00:10:23.868 [2024-11-20 08:43:54.690114] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:25.247 08:43:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7kMScONGEI 00:10:25.247 08:43:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:25.247 08:43:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:25.247 08:43:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:25.247 08:43:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:25.247 08:43:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:25.247 08:43:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:25.247 08:43:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:25.247 00:10:25.247 real 0m4.640s 00:10:25.247 user 0m5.903s 00:10:25.247 sys 0m0.562s 00:10:25.247 ************************************ 00:10:25.247 08:43:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.247 08:43:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.247 END TEST raid_write_error_test 00:10:25.247 ************************************ 00:10:25.247 08:43:55 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:25.247 08:43:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:25.247 08:43:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:10:25.247 08:43:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:25.247 08:43:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.247 08:43:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.247 ************************************ 00:10:25.247 START TEST raid_state_function_test 00:10:25.247 ************************************ 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:25.247 
08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63800 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63800' 00:10:25.247 Process raid pid: 63800 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63800 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63800 ']' 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.247 08:43:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.247 [2024-11-20 08:43:55.938972] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:10:25.247 [2024-11-20 08:43:55.939188] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.247 [2024-11-20 08:43:56.131359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.506 [2024-11-20 08:43:56.286232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.765 [2024-11-20 08:43:56.505975] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.765 [2024-11-20 08:43:56.506034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.024 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.024 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:26.024 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:26.391 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.392 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.392 [2024-11-20 08:43:56.942670] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.392 [2024-11-20 08:43:56.942736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.392 [2024-11-20 08:43:56.942753] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.392 [2024-11-20 08:43:56.942770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.392 [2024-11-20 08:43:56.942780] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:26.392 [2024-11-20 08:43:56.942794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.392 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.392 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:26.392 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.392 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.392 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.392 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.392 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.392 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.392 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.392 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.392 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.392 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.392 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.392 08:43:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.392 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.392 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.392 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.392 "name": "Existed_Raid", 00:10:26.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.392 "strip_size_kb": 64, 00:10:26.392 "state": "configuring", 00:10:26.392 "raid_level": "raid0", 00:10:26.392 "superblock": false, 00:10:26.392 "num_base_bdevs": 3, 00:10:26.392 "num_base_bdevs_discovered": 0, 00:10:26.392 "num_base_bdevs_operational": 3, 00:10:26.392 "base_bdevs_list": [ 00:10:26.392 { 00:10:26.392 "name": "BaseBdev1", 00:10:26.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.392 "is_configured": false, 00:10:26.392 "data_offset": 0, 00:10:26.392 "data_size": 0 00:10:26.392 }, 00:10:26.392 { 00:10:26.392 "name": "BaseBdev2", 00:10:26.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.392 "is_configured": false, 00:10:26.392 "data_offset": 0, 00:10:26.392 "data_size": 0 00:10:26.392 }, 00:10:26.392 { 00:10:26.392 "name": "BaseBdev3", 00:10:26.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.392 "is_configured": false, 00:10:26.392 "data_offset": 0, 00:10:26.392 "data_size": 0 00:10:26.392 } 00:10:26.392 ] 00:10:26.392 }' 00:10:26.392 08:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.392 08:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.652 08:43:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.652 [2024-11-20 08:43:57.434824] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.652 [2024-11-20 08:43:57.434873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.652 [2024-11-20 08:43:57.442811] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.652 [2024-11-20 08:43:57.442870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.652 [2024-11-20 08:43:57.442885] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.652 [2024-11-20 08:43:57.442901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.652 [2024-11-20 08:43:57.442910] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:26.652 [2024-11-20 08:43:57.442924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.652 [2024-11-20 08:43:57.487671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.652 BaseBdev1 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.652 [ 00:10:26.652 { 00:10:26.652 "name": "BaseBdev1", 00:10:26.652 "aliases": [ 00:10:26.652 "b8103124-808a-499f-806b-f5262cf4ea31" 00:10:26.652 ], 00:10:26.652 
"product_name": "Malloc disk", 00:10:26.652 "block_size": 512, 00:10:26.652 "num_blocks": 65536, 00:10:26.652 "uuid": "b8103124-808a-499f-806b-f5262cf4ea31", 00:10:26.652 "assigned_rate_limits": { 00:10:26.652 "rw_ios_per_sec": 0, 00:10:26.652 "rw_mbytes_per_sec": 0, 00:10:26.652 "r_mbytes_per_sec": 0, 00:10:26.652 "w_mbytes_per_sec": 0 00:10:26.652 }, 00:10:26.652 "claimed": true, 00:10:26.652 "claim_type": "exclusive_write", 00:10:26.652 "zoned": false, 00:10:26.652 "supported_io_types": { 00:10:26.652 "read": true, 00:10:26.652 "write": true, 00:10:26.652 "unmap": true, 00:10:26.652 "flush": true, 00:10:26.652 "reset": true, 00:10:26.652 "nvme_admin": false, 00:10:26.652 "nvme_io": false, 00:10:26.652 "nvme_io_md": false, 00:10:26.652 "write_zeroes": true, 00:10:26.652 "zcopy": true, 00:10:26.652 "get_zone_info": false, 00:10:26.652 "zone_management": false, 00:10:26.652 "zone_append": false, 00:10:26.652 "compare": false, 00:10:26.652 "compare_and_write": false, 00:10:26.652 "abort": true, 00:10:26.652 "seek_hole": false, 00:10:26.652 "seek_data": false, 00:10:26.652 "copy": true, 00:10:26.652 "nvme_iov_md": false 00:10:26.652 }, 00:10:26.652 "memory_domains": [ 00:10:26.652 { 00:10:26.652 "dma_device_id": "system", 00:10:26.652 "dma_device_type": 1 00:10:26.652 }, 00:10:26.652 { 00:10:26.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.652 "dma_device_type": 2 00:10:26.652 } 00:10:26.652 ], 00:10:26.652 "driver_specific": {} 00:10:26.652 } 00:10:26.652 ] 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.652 08:43:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.652 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.653 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.653 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.653 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.653 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.911 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.911 "name": "Existed_Raid", 00:10:26.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.911 "strip_size_kb": 64, 00:10:26.911 "state": "configuring", 00:10:26.911 "raid_level": "raid0", 00:10:26.911 "superblock": false, 00:10:26.911 "num_base_bdevs": 3, 00:10:26.911 "num_base_bdevs_discovered": 1, 00:10:26.911 "num_base_bdevs_operational": 3, 00:10:26.911 "base_bdevs_list": [ 00:10:26.911 { 00:10:26.911 "name": "BaseBdev1", 
00:10:26.911 "uuid": "b8103124-808a-499f-806b-f5262cf4ea31", 00:10:26.911 "is_configured": true, 00:10:26.911 "data_offset": 0, 00:10:26.911 "data_size": 65536 00:10:26.911 }, 00:10:26.911 { 00:10:26.911 "name": "BaseBdev2", 00:10:26.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.911 "is_configured": false, 00:10:26.911 "data_offset": 0, 00:10:26.911 "data_size": 0 00:10:26.911 }, 00:10:26.911 { 00:10:26.911 "name": "BaseBdev3", 00:10:26.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.911 "is_configured": false, 00:10:26.911 "data_offset": 0, 00:10:26.911 "data_size": 0 00:10:26.911 } 00:10:26.911 ] 00:10:26.911 }' 00:10:26.911 08:43:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.911 08:43:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.170 [2024-11-20 08:43:58.031976] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:27.170 [2024-11-20 08:43:58.032041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.170 [2024-11-20 
08:43:58.040004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.170 [2024-11-20 08:43:58.042563] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:27.170 [2024-11-20 08:43:58.042615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:27.170 [2024-11-20 08:43:58.042631] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:27.170 [2024-11-20 08:43:58.042646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.170 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.427 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.427 "name": "Existed_Raid", 00:10:27.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.428 "strip_size_kb": 64, 00:10:27.428 "state": "configuring", 00:10:27.428 "raid_level": "raid0", 00:10:27.428 "superblock": false, 00:10:27.428 "num_base_bdevs": 3, 00:10:27.428 "num_base_bdevs_discovered": 1, 00:10:27.428 "num_base_bdevs_operational": 3, 00:10:27.428 "base_bdevs_list": [ 00:10:27.428 { 00:10:27.428 "name": "BaseBdev1", 00:10:27.428 "uuid": "b8103124-808a-499f-806b-f5262cf4ea31", 00:10:27.428 "is_configured": true, 00:10:27.428 "data_offset": 0, 00:10:27.428 "data_size": 65536 00:10:27.428 }, 00:10:27.428 { 00:10:27.428 "name": "BaseBdev2", 00:10:27.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.428 "is_configured": false, 00:10:27.428 "data_offset": 0, 00:10:27.428 "data_size": 0 00:10:27.428 }, 00:10:27.428 { 00:10:27.428 "name": "BaseBdev3", 00:10:27.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.428 "is_configured": false, 00:10:27.428 "data_offset": 0, 00:10:27.428 "data_size": 0 00:10:27.428 } 00:10:27.428 ] 00:10:27.428 }' 00:10:27.428 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:27.428 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.685 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:27.685 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.685 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.685 [2024-11-20 08:43:58.590840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.685 BaseBdev2 00:10:27.685 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.685 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:27.685 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:27.685 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.685 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:27.685 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.685 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.685 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.685 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.685 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.942 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.942 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:27.942 08:43:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.942 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.942 [ 00:10:27.942 { 00:10:27.942 "name": "BaseBdev2", 00:10:27.942 "aliases": [ 00:10:27.942 "0908b2d6-dce4-4e03-a0b6-32d2916e4c68" 00:10:27.942 ], 00:10:27.942 "product_name": "Malloc disk", 00:10:27.942 "block_size": 512, 00:10:27.942 "num_blocks": 65536, 00:10:27.942 "uuid": "0908b2d6-dce4-4e03-a0b6-32d2916e4c68", 00:10:27.942 "assigned_rate_limits": { 00:10:27.942 "rw_ios_per_sec": 0, 00:10:27.942 "rw_mbytes_per_sec": 0, 00:10:27.942 "r_mbytes_per_sec": 0, 00:10:27.942 "w_mbytes_per_sec": 0 00:10:27.942 }, 00:10:27.942 "claimed": true, 00:10:27.942 "claim_type": "exclusive_write", 00:10:27.942 "zoned": false, 00:10:27.942 "supported_io_types": { 00:10:27.942 "read": true, 00:10:27.942 "write": true, 00:10:27.942 "unmap": true, 00:10:27.942 "flush": true, 00:10:27.942 "reset": true, 00:10:27.943 "nvme_admin": false, 00:10:27.943 "nvme_io": false, 00:10:27.943 "nvme_io_md": false, 00:10:27.943 "write_zeroes": true, 00:10:27.943 "zcopy": true, 00:10:27.943 "get_zone_info": false, 00:10:27.943 "zone_management": false, 00:10:27.943 "zone_append": false, 00:10:27.943 "compare": false, 00:10:27.943 "compare_and_write": false, 00:10:27.943 "abort": true, 00:10:27.943 "seek_hole": false, 00:10:27.943 "seek_data": false, 00:10:27.943 "copy": true, 00:10:27.943 "nvme_iov_md": false 00:10:27.943 }, 00:10:27.943 "memory_domains": [ 00:10:27.943 { 00:10:27.943 "dma_device_id": "system", 00:10:27.943 "dma_device_type": 1 00:10:27.943 }, 00:10:27.943 { 00:10:27.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.943 "dma_device_type": 2 00:10:27.943 } 00:10:27.943 ], 00:10:27.943 "driver_specific": {} 00:10:27.943 } 00:10:27.943 ] 00:10:27.943 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.943 08:43:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:27.943 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:27.943 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.943 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:27.943 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.943 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.943 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.943 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.943 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.943 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.943 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.943 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.943 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.943 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.943 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.943 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.943 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.943 08:43:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.943 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.943 "name": "Existed_Raid", 00:10:27.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.943 "strip_size_kb": 64, 00:10:27.943 "state": "configuring", 00:10:27.943 "raid_level": "raid0", 00:10:27.943 "superblock": false, 00:10:27.943 "num_base_bdevs": 3, 00:10:27.943 "num_base_bdevs_discovered": 2, 00:10:27.943 "num_base_bdevs_operational": 3, 00:10:27.943 "base_bdevs_list": [ 00:10:27.943 { 00:10:27.943 "name": "BaseBdev1", 00:10:27.943 "uuid": "b8103124-808a-499f-806b-f5262cf4ea31", 00:10:27.943 "is_configured": true, 00:10:27.943 "data_offset": 0, 00:10:27.943 "data_size": 65536 00:10:27.943 }, 00:10:27.943 { 00:10:27.943 "name": "BaseBdev2", 00:10:27.943 "uuid": "0908b2d6-dce4-4e03-a0b6-32d2916e4c68", 00:10:27.943 "is_configured": true, 00:10:27.943 "data_offset": 0, 00:10:27.943 "data_size": 65536 00:10:27.943 }, 00:10:27.943 { 00:10:27.943 "name": "BaseBdev3", 00:10:27.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.943 "is_configured": false, 00:10:27.943 "data_offset": 0, 00:10:27.943 "data_size": 0 00:10:27.943 } 00:10:27.943 ] 00:10:27.943 }' 00:10:27.943 08:43:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.943 08:43:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.201 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:28.201 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.201 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.459 [2024-11-20 08:43:59.138762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:28.459 [2024-11-20 08:43:59.138832] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:28.459 [2024-11-20 08:43:59.138853] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:28.459 [2024-11-20 08:43:59.139255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:28.459 [2024-11-20 08:43:59.139471] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:28.459 [2024-11-20 08:43:59.139487] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:28.459 [2024-11-20 08:43:59.139828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.459 BaseBdev3 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.459 
08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.459 [ 00:10:28.459 { 00:10:28.459 "name": "BaseBdev3", 00:10:28.459 "aliases": [ 00:10:28.459 "fb645eed-196c-4fff-8f5b-4572eb43c661" 00:10:28.459 ], 00:10:28.459 "product_name": "Malloc disk", 00:10:28.459 "block_size": 512, 00:10:28.459 "num_blocks": 65536, 00:10:28.459 "uuid": "fb645eed-196c-4fff-8f5b-4572eb43c661", 00:10:28.459 "assigned_rate_limits": { 00:10:28.459 "rw_ios_per_sec": 0, 00:10:28.459 "rw_mbytes_per_sec": 0, 00:10:28.459 "r_mbytes_per_sec": 0, 00:10:28.459 "w_mbytes_per_sec": 0 00:10:28.459 }, 00:10:28.459 "claimed": true, 00:10:28.459 "claim_type": "exclusive_write", 00:10:28.459 "zoned": false, 00:10:28.459 "supported_io_types": { 00:10:28.459 "read": true, 00:10:28.459 "write": true, 00:10:28.459 "unmap": true, 00:10:28.459 "flush": true, 00:10:28.459 "reset": true, 00:10:28.459 "nvme_admin": false, 00:10:28.459 "nvme_io": false, 00:10:28.459 "nvme_io_md": false, 00:10:28.459 "write_zeroes": true, 00:10:28.459 "zcopy": true, 00:10:28.459 "get_zone_info": false, 00:10:28.459 "zone_management": false, 00:10:28.459 "zone_append": false, 00:10:28.459 "compare": false, 00:10:28.459 "compare_and_write": false, 00:10:28.459 "abort": true, 00:10:28.459 "seek_hole": false, 00:10:28.459 "seek_data": false, 00:10:28.459 "copy": true, 00:10:28.459 "nvme_iov_md": false 00:10:28.459 }, 00:10:28.459 "memory_domains": [ 00:10:28.459 { 00:10:28.459 "dma_device_id": "system", 00:10:28.459 "dma_device_type": 1 00:10:28.459 }, 00:10:28.459 { 00:10:28.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.459 "dma_device_type": 2 00:10:28.459 } 00:10:28.459 ], 00:10:28.459 "driver_specific": {} 00:10:28.459 } 00:10:28.459 ] 
00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.459 "name": "Existed_Raid", 00:10:28.459 "uuid": "47e85521-21b4-499f-b53e-77187a79bc62", 00:10:28.459 "strip_size_kb": 64, 00:10:28.459 "state": "online", 00:10:28.459 "raid_level": "raid0", 00:10:28.459 "superblock": false, 00:10:28.459 "num_base_bdevs": 3, 00:10:28.459 "num_base_bdevs_discovered": 3, 00:10:28.459 "num_base_bdevs_operational": 3, 00:10:28.459 "base_bdevs_list": [ 00:10:28.459 { 00:10:28.459 "name": "BaseBdev1", 00:10:28.459 "uuid": "b8103124-808a-499f-806b-f5262cf4ea31", 00:10:28.459 "is_configured": true, 00:10:28.459 "data_offset": 0, 00:10:28.459 "data_size": 65536 00:10:28.459 }, 00:10:28.459 { 00:10:28.459 "name": "BaseBdev2", 00:10:28.459 "uuid": "0908b2d6-dce4-4e03-a0b6-32d2916e4c68", 00:10:28.459 "is_configured": true, 00:10:28.459 "data_offset": 0, 00:10:28.459 "data_size": 65536 00:10:28.459 }, 00:10:28.459 { 00:10:28.459 "name": "BaseBdev3", 00:10:28.459 "uuid": "fb645eed-196c-4fff-8f5b-4572eb43c661", 00:10:28.459 "is_configured": true, 00:10:28.459 "data_offset": 0, 00:10:28.459 "data_size": 65536 00:10:28.459 } 00:10:28.459 ] 00:10:28.459 }' 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.459 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.025 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:29.025 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:29.025 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:29.025 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:29.025 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:29.025 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:29.025 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:29.025 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.025 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.025 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:29.025 [2024-11-20 08:43:59.651364] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.025 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.025 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:29.025 "name": "Existed_Raid", 00:10:29.025 "aliases": [ 00:10:29.025 "47e85521-21b4-499f-b53e-77187a79bc62" 00:10:29.025 ], 00:10:29.025 "product_name": "Raid Volume", 00:10:29.025 "block_size": 512, 00:10:29.025 "num_blocks": 196608, 00:10:29.025 "uuid": "47e85521-21b4-499f-b53e-77187a79bc62", 00:10:29.025 "assigned_rate_limits": { 00:10:29.025 "rw_ios_per_sec": 0, 00:10:29.025 "rw_mbytes_per_sec": 0, 00:10:29.025 "r_mbytes_per_sec": 0, 00:10:29.025 "w_mbytes_per_sec": 0 00:10:29.025 }, 00:10:29.025 "claimed": false, 00:10:29.025 "zoned": false, 00:10:29.025 "supported_io_types": { 00:10:29.025 "read": true, 00:10:29.025 "write": true, 00:10:29.025 "unmap": true, 00:10:29.025 "flush": true, 00:10:29.025 "reset": true, 00:10:29.025 "nvme_admin": false, 00:10:29.025 "nvme_io": false, 00:10:29.025 "nvme_io_md": false, 00:10:29.025 "write_zeroes": true, 00:10:29.025 "zcopy": false, 00:10:29.025 "get_zone_info": false, 00:10:29.025 "zone_management": false, 00:10:29.025 
"zone_append": false, 00:10:29.025 "compare": false, 00:10:29.025 "compare_and_write": false, 00:10:29.025 "abort": false, 00:10:29.025 "seek_hole": false, 00:10:29.025 "seek_data": false, 00:10:29.025 "copy": false, 00:10:29.025 "nvme_iov_md": false 00:10:29.025 }, 00:10:29.025 "memory_domains": [ 00:10:29.025 { 00:10:29.025 "dma_device_id": "system", 00:10:29.025 "dma_device_type": 1 00:10:29.025 }, 00:10:29.025 { 00:10:29.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.025 "dma_device_type": 2 00:10:29.025 }, 00:10:29.025 { 00:10:29.025 "dma_device_id": "system", 00:10:29.025 "dma_device_type": 1 00:10:29.025 }, 00:10:29.025 { 00:10:29.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.025 "dma_device_type": 2 00:10:29.025 }, 00:10:29.025 { 00:10:29.025 "dma_device_id": "system", 00:10:29.025 "dma_device_type": 1 00:10:29.025 }, 00:10:29.025 { 00:10:29.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.025 "dma_device_type": 2 00:10:29.025 } 00:10:29.025 ], 00:10:29.025 "driver_specific": { 00:10:29.025 "raid": { 00:10:29.025 "uuid": "47e85521-21b4-499f-b53e-77187a79bc62", 00:10:29.025 "strip_size_kb": 64, 00:10:29.025 "state": "online", 00:10:29.025 "raid_level": "raid0", 00:10:29.025 "superblock": false, 00:10:29.025 "num_base_bdevs": 3, 00:10:29.025 "num_base_bdevs_discovered": 3, 00:10:29.025 "num_base_bdevs_operational": 3, 00:10:29.025 "base_bdevs_list": [ 00:10:29.025 { 00:10:29.025 "name": "BaseBdev1", 00:10:29.025 "uuid": "b8103124-808a-499f-806b-f5262cf4ea31", 00:10:29.025 "is_configured": true, 00:10:29.025 "data_offset": 0, 00:10:29.026 "data_size": 65536 00:10:29.026 }, 00:10:29.026 { 00:10:29.026 "name": "BaseBdev2", 00:10:29.026 "uuid": "0908b2d6-dce4-4e03-a0b6-32d2916e4c68", 00:10:29.026 "is_configured": true, 00:10:29.026 "data_offset": 0, 00:10:29.026 "data_size": 65536 00:10:29.026 }, 00:10:29.026 { 00:10:29.026 "name": "BaseBdev3", 00:10:29.026 "uuid": "fb645eed-196c-4fff-8f5b-4572eb43c661", 00:10:29.026 "is_configured": true, 
00:10:29.026 "data_offset": 0, 00:10:29.026 "data_size": 65536 00:10:29.026 } 00:10:29.026 ] 00:10:29.026 } 00:10:29.026 } 00:10:29.026 }' 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:29.026 BaseBdev2 00:10:29.026 BaseBdev3' 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.026 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.284 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.284 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.284 08:43:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:29.285 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.285 08:43:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.285 [2024-11-20 08:43:59.959119] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:29.285 [2024-11-20 08:43:59.959172] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:29.285 [2024-11-20 08:43:59.959246] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.285 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.285 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:29.285 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:29.285 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:29.285 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:29.285 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:29.285 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:29.285 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.285 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:29.285 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.285 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.285 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:29.285 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.285 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.285 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:29.285 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.285 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.285 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.285 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.285 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.285 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.285 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.285 "name": "Existed_Raid", 00:10:29.285 "uuid": "47e85521-21b4-499f-b53e-77187a79bc62", 00:10:29.285 "strip_size_kb": 64, 00:10:29.285 "state": "offline", 00:10:29.285 "raid_level": "raid0", 00:10:29.285 "superblock": false, 00:10:29.285 "num_base_bdevs": 3, 00:10:29.285 "num_base_bdevs_discovered": 2, 00:10:29.285 "num_base_bdevs_operational": 2, 00:10:29.285 "base_bdevs_list": [ 00:10:29.285 { 00:10:29.285 "name": null, 00:10:29.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.285 "is_configured": false, 00:10:29.285 "data_offset": 0, 00:10:29.285 "data_size": 65536 00:10:29.285 }, 00:10:29.285 { 00:10:29.285 "name": "BaseBdev2", 00:10:29.285 "uuid": "0908b2d6-dce4-4e03-a0b6-32d2916e4c68", 00:10:29.285 "is_configured": true, 00:10:29.285 "data_offset": 0, 00:10:29.285 "data_size": 65536 00:10:29.285 }, 00:10:29.285 { 00:10:29.285 "name": "BaseBdev3", 00:10:29.285 "uuid": "fb645eed-196c-4fff-8f5b-4572eb43c661", 00:10:29.285 "is_configured": true, 00:10:29.285 "data_offset": 0, 00:10:29.285 "data_size": 65536 00:10:29.285 } 00:10:29.285 ] 00:10:29.285 }' 00:10:29.285 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.285 08:44:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.851 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:10:29.852 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:29.852 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:10:29.852 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:29.852 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:29.852 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.852 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:29.852 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:10:29.852 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:29.852 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:10:29.852 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:29.852 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.852 [2024-11-20 08:44:00.565127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:29.852 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:29.852 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:10:29.852 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:29.852 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:29.852 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:10:29.852 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:29.852 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.852 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:29.852 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:10:29.852 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:29.852 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:10:29.852 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:29.852 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.852 [2024-11-20 08:44:00.709710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:29.852 [2024-11-20 08:44:00.709789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:10:30.111 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:30.111 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:10:30.111 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:30.111 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:10:30.111 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:30.111 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:30.111 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:30.111 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:30.111 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:10:30.111 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:10:30.111 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:30.112 BaseBdev2
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:30.112 [
00:10:30.112 {
00:10:30.112 "name": "BaseBdev2",
00:10:30.112 "aliases": [
00:10:30.112 "e008597e-9c37-46a5-81d5-540a26e64fe3"
00:10:30.112 ],
00:10:30.112 "product_name": "Malloc disk",
00:10:30.112 "block_size": 512,
00:10:30.112 "num_blocks": 65536,
00:10:30.112 "uuid": "e008597e-9c37-46a5-81d5-540a26e64fe3",
00:10:30.112 "assigned_rate_limits": {
00:10:30.112 "rw_ios_per_sec": 0,
00:10:30.112 "rw_mbytes_per_sec": 0,
00:10:30.112 "r_mbytes_per_sec": 0,
00:10:30.112 "w_mbytes_per_sec": 0
00:10:30.112 },
00:10:30.112 "claimed": false,
00:10:30.112 "zoned": false,
00:10:30.112 "supported_io_types": {
00:10:30.112 "read": true,
00:10:30.112 "write": true,
00:10:30.112 "unmap": true,
00:10:30.112 "flush": true,
00:10:30.112 "reset": true,
00:10:30.112 "nvme_admin": false,
00:10:30.112 "nvme_io": false,
00:10:30.112 "nvme_io_md": false,
00:10:30.112 "write_zeroes": true,
00:10:30.112 "zcopy": true,
00:10:30.112 "get_zone_info": false,
00:10:30.112 "zone_management": false,
00:10:30.112 "zone_append": false,
00:10:30.112 "compare": false,
00:10:30.112 "compare_and_write": false,
00:10:30.112 "abort": true,
00:10:30.112 "seek_hole": false,
00:10:30.112 "seek_data": false,
00:10:30.112 "copy": true,
00:10:30.112 "nvme_iov_md": false
00:10:30.112 },
00:10:30.112 "memory_domains": [
00:10:30.112 {
00:10:30.112 "dma_device_id": "system",
00:10:30.112 "dma_device_type": 1
00:10:30.112 },
00:10:30.112 {
00:10:30.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:30.112 "dma_device_type": 2
00:10:30.112 }
00:10:30.112 ],
00:10:30.112 "driver_specific": {}
00:10:30.112 }
00:10:30.112 ]
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:30.112 BaseBdev3
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:30.112 [
00:10:30.112 {
00:10:30.112 "name": "BaseBdev3",
00:10:30.112 "aliases": [
00:10:30.112 "c17d0f88-cc33-4745-bef3-8bbb21835d6c"
00:10:30.112 ],
00:10:30.112 "product_name": "Malloc disk",
00:10:30.112 "block_size": 512,
00:10:30.112 "num_blocks": 65536,
00:10:30.112 "uuid": "c17d0f88-cc33-4745-bef3-8bbb21835d6c",
00:10:30.112 "assigned_rate_limits": {
00:10:30.112 "rw_ios_per_sec": 0,
00:10:30.112 "rw_mbytes_per_sec": 0,
00:10:30.112 "r_mbytes_per_sec": 0,
00:10:30.112 "w_mbytes_per_sec": 0
00:10:30.112 },
00:10:30.112 "claimed": false,
00:10:30.112 "zoned": false,
00:10:30.112 "supported_io_types": {
00:10:30.112 "read": true,
00:10:30.112 "write": true,
00:10:30.112 "unmap": true,
00:10:30.112 "flush": true,
00:10:30.112 "reset": true,
00:10:30.112 "nvme_admin": false,
00:10:30.112 "nvme_io": false,
00:10:30.112 "nvme_io_md": false,
00:10:30.112 "write_zeroes": true,
00:10:30.112 "zcopy": true,
00:10:30.112 "get_zone_info": false,
00:10:30.112 "zone_management": false,
00:10:30.112 "zone_append": false,
00:10:30.112 "compare": false,
00:10:30.112 "compare_and_write": false,
00:10:30.112 "abort": true,
00:10:30.112 "seek_hole": false,
00:10:30.112 "seek_data": false,
00:10:30.112 "copy": true,
00:10:30.112 "nvme_iov_md": false
00:10:30.112 },
00:10:30.112 "memory_domains": [
00:10:30.112 {
00:10:30.112 "dma_device_id": "system",
00:10:30.112 "dma_device_type": 1
00:10:30.112 },
00:10:30.112 {
00:10:30.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:30.112 "dma_device_type": 2
00:10:30.112 }
00:10:30.112 ],
00:10:30.112 "driver_specific": {}
00:10:30.112 }
00:10:30.112 ]
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:30.112 08:44:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:30.112 [2024-11-20 08:44:01.003305] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:30.112 [2024-11-20 08:44:01.003360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:30.112 [2024-11-20 08:44:01.003408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:30.112 [2024-11-20 08:44:01.005838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:30.112 08:44:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:30.112 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:30.112 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:30.112 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:30.112 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:30.112 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:30.112 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:30.112 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:30.112 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:30.112 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:30.112 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:30.112 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:30.112 08:44:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:30.112 08:44:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:30.112 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:30.371 08:44:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:30.371 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:30.371 "name": "Existed_Raid",
00:10:30.371 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:30.371 "strip_size_kb": 64,
00:10:30.371 "state": "configuring",
00:10:30.371 "raid_level": "raid0",
00:10:30.371 "superblock": false,
00:10:30.371 "num_base_bdevs": 3,
00:10:30.372 "num_base_bdevs_discovered": 2,
00:10:30.372 "num_base_bdevs_operational": 3,
00:10:30.372 "base_bdevs_list": [
00:10:30.372 {
00:10:30.372 "name": "BaseBdev1",
00:10:30.372 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:30.372 "is_configured": false,
00:10:30.372 "data_offset": 0,
00:10:30.372 "data_size": 0
00:10:30.372 },
00:10:30.372 {
00:10:30.372 "name": "BaseBdev2",
00:10:30.372 "uuid": "e008597e-9c37-46a5-81d5-540a26e64fe3",
00:10:30.372 "is_configured": true,
00:10:30.372 "data_offset": 0,
00:10:30.372 "data_size": 65536
00:10:30.372 },
00:10:30.372 {
00:10:30.372 "name": "BaseBdev3",
00:10:30.372 "uuid": "c17d0f88-cc33-4745-bef3-8bbb21835d6c",
00:10:30.372 "is_configured": true,
00:10:30.372 "data_offset": 0,
00:10:30.372 "data_size": 65536
00:10:30.372 }
00:10:30.372 ]
00:10:30.372 }'
00:10:30.372 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:30.372 08:44:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:30.630 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:10:30.630 08:44:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:30.630 08:44:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:30.630 [2024-11-20 08:44:01.527493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:30.630 08:44:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:30.630 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:30.630 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:30.630 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:30.630 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:30.630 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:30.630 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:30.630 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:30.630 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:30.630 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:30.630 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:30.630 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:30.630 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:30.630 08:44:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:30.630 08:44:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:30.889 08:44:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:30.889 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:30.889 "name": "Existed_Raid",
00:10:30.889 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:30.889 "strip_size_kb": 64,
00:10:30.889 "state": "configuring",
00:10:30.889 "raid_level": "raid0",
00:10:30.889 "superblock": false,
00:10:30.889 "num_base_bdevs": 3,
00:10:30.889 "num_base_bdevs_discovered": 1,
00:10:30.889 "num_base_bdevs_operational": 3,
00:10:30.889 "base_bdevs_list": [
00:10:30.889 {
00:10:30.889 "name": "BaseBdev1",
00:10:30.889 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:30.889 "is_configured": false,
00:10:30.889 "data_offset": 0,
00:10:30.889 "data_size": 0
00:10:30.889 },
00:10:30.889 {
00:10:30.889 "name": null,
00:10:30.889 "uuid": "e008597e-9c37-46a5-81d5-540a26e64fe3",
00:10:30.889 "is_configured": false,
00:10:30.889 "data_offset": 0,
00:10:30.889 "data_size": 65536
00:10:30.889 },
00:10:30.889 {
00:10:30.889 "name": "BaseBdev3",
00:10:30.889 "uuid": "c17d0f88-cc33-4745-bef3-8bbb21835d6c",
00:10:30.889 "is_configured": true,
00:10:30.889 "data_offset": 0,
00:10:30.889 "data_size": 65536
00:10:30.889 }
00:10:30.889 ]
00:10:30.889 }'
00:10:30.889 08:44:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:30.889 08:44:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.457 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:31.457 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:31.457 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.457 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:31.457 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:31.457 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:10:31.457 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:31.457 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:31.457 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.457 [2024-11-20 08:44:02.189884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:31.457 BaseBdev1
00:10:31.457 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:31.457 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:10:31.457 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:10:31.457 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:31.457 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:31.457 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:31.457 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:31.457 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:31.457 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:31.457 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.457 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:31.457 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:31.457 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:31.457 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.457 [
00:10:31.457 {
00:10:31.457 "name": "BaseBdev1",
00:10:31.457 "aliases": [
00:10:31.457 "b86dd0ad-f82d-4ff2-b826-a2107168c5e3"
00:10:31.457 ],
00:10:31.457 "product_name": "Malloc disk",
00:10:31.457 "block_size": 512,
00:10:31.457 "num_blocks": 65536,
00:10:31.457 "uuid": "b86dd0ad-f82d-4ff2-b826-a2107168c5e3",
00:10:31.457 "assigned_rate_limits": {
00:10:31.457 "rw_ios_per_sec": 0,
00:10:31.457 "rw_mbytes_per_sec": 0,
00:10:31.457 "r_mbytes_per_sec": 0,
00:10:31.457 "w_mbytes_per_sec": 0
00:10:31.457 },
00:10:31.457 "claimed": true,
00:10:31.457 "claim_type": "exclusive_write",
00:10:31.457 "zoned": false,
00:10:31.457 "supported_io_types": {
00:10:31.457 "read": true,
00:10:31.457 "write": true,
00:10:31.457 "unmap": true,
00:10:31.457 "flush": true,
00:10:31.457 "reset": true,
00:10:31.457 "nvme_admin": false,
00:10:31.457 "nvme_io": false,
00:10:31.457 "nvme_io_md": false,
00:10:31.457 "write_zeroes": true,
00:10:31.457 "zcopy": true,
00:10:31.457 "get_zone_info": false,
00:10:31.457 "zone_management": false,
00:10:31.457 "zone_append": false,
00:10:31.457 "compare": false,
00:10:31.457 "compare_and_write": false,
00:10:31.457 "abort": true,
00:10:31.457 "seek_hole": false,
00:10:31.457 "seek_data": false,
00:10:31.458 "copy": true,
00:10:31.458 "nvme_iov_md": false
00:10:31.458 },
00:10:31.458 "memory_domains": [
00:10:31.458 {
00:10:31.458 "dma_device_id": "system",
00:10:31.458 "dma_device_type": 1
00:10:31.458 },
00:10:31.458 {
00:10:31.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:31.458 "dma_device_type": 2
00:10:31.458 }
00:10:31.458 ],
00:10:31.458 "driver_specific": {}
00:10:31.458 }
00:10:31.458 ]
00:10:31.458 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:31.458 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:31.458 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:31.458 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:31.458 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:31.458 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:31.458 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:31.458 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:31.458 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:31.458 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:31.458 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:31.458 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:31.458 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:31.458 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:31.458 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.458 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:31.458 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:31.458 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:31.458 "name": "Existed_Raid",
00:10:31.458 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:31.458 "strip_size_kb": 64,
00:10:31.458 "state": "configuring",
00:10:31.458 "raid_level": "raid0",
00:10:31.458 "superblock": false,
00:10:31.458 "num_base_bdevs": 3,
00:10:31.458 "num_base_bdevs_discovered": 2,
00:10:31.458 "num_base_bdevs_operational": 3,
00:10:31.458 "base_bdevs_list": [
00:10:31.458 {
00:10:31.458 "name": "BaseBdev1",
00:10:31.458 "uuid": "b86dd0ad-f82d-4ff2-b826-a2107168c5e3",
00:10:31.458 "is_configured": true,
00:10:31.458 "data_offset": 0,
00:10:31.458 "data_size": 65536
00:10:31.458 },
00:10:31.458 {
00:10:31.458 "name": null,
00:10:31.458 "uuid": "e008597e-9c37-46a5-81d5-540a26e64fe3",
00:10:31.458 "is_configured": false,
00:10:31.458 "data_offset": 0,
00:10:31.458 "data_size": 65536
00:10:31.458 },
00:10:31.458 {
00:10:31.458 "name": "BaseBdev3",
00:10:31.458 "uuid": "c17d0f88-cc33-4745-bef3-8bbb21835d6c",
00:10:31.458 "is_configured": true,
00:10:31.458 "data_offset": 0,
00:10:31.458 "data_size": 65536
00:10:31.458 }
00:10:31.458 ]
00:10:31.458 }'
00:10:31.458 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:31.458 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.026 [2024-11-20 08:44:02.790079] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.026 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:32.026 "name": "Existed_Raid",
00:10:32.026 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:32.026 "strip_size_kb": 64,
00:10:32.026 "state": "configuring",
00:10:32.026 "raid_level": "raid0",
00:10:32.026 "superblock": false,
00:10:32.026 "num_base_bdevs": 3,
00:10:32.026 "num_base_bdevs_discovered": 1,
00:10:32.026 "num_base_bdevs_operational": 3,
00:10:32.026 "base_bdevs_list": [
00:10:32.026 {
00:10:32.026 "name": "BaseBdev1",
00:10:32.026 "uuid": "b86dd0ad-f82d-4ff2-b826-a2107168c5e3",
00:10:32.026 "is_configured": true,
00:10:32.026 "data_offset": 0,
00:10:32.026 "data_size": 65536
00:10:32.027 },
00:10:32.027 {
00:10:32.027 "name": null,
00:10:32.027 "uuid": "e008597e-9c37-46a5-81d5-540a26e64fe3",
00:10:32.027 "is_configured": false,
00:10:32.027 "data_offset": 0,
00:10:32.027 "data_size": 65536
00:10:32.027 },
00:10:32.027 {
00:10:32.027 "name": null,
00:10:32.027 "uuid": "c17d0f88-cc33-4745-bef3-8bbb21835d6c",
00:10:32.027 "is_configured": false,
00:10:32.027 "data_offset": 0,
00:10:32.027 "data_size": 65536
00:10:32.027 }
00:10:32.027 ]
00:10:32.027 }'
00:10:32.027 08:44:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:32.027 08:44:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.594 [2024-11-20 08:44:03.390359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:32.594 "name": "Existed_Raid",
00:10:32.594 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:32.594 "strip_size_kb": 64,
00:10:32.594 "state": "configuring",
00:10:32.594 "raid_level": "raid0",
00:10:32.594 "superblock": false,
00:10:32.594 "num_base_bdevs": 3,
00:10:32.594 "num_base_bdevs_discovered": 2,
00:10:32.594 "num_base_bdevs_operational": 3,
00:10:32.594 "base_bdevs_list": [
00:10:32.594 {
00:10:32.594 "name": "BaseBdev1",
00:10:32.594 "uuid": "b86dd0ad-f82d-4ff2-b826-a2107168c5e3",
00:10:32.594 "is_configured": true,
00:10:32.594 "data_offset": 0,
00:10:32.594 "data_size": 65536
00:10:32.594 },
00:10:32.594 {
00:10:32.594 "name": null,
00:10:32.594 "uuid": "e008597e-9c37-46a5-81d5-540a26e64fe3",
00:10:32.594 "is_configured": false,
00:10:32.594 "data_offset": 0,
00:10:32.594 "data_size": 65536
00:10:32.594 },
00:10:32.594 {
00:10:32.594 "name": "BaseBdev3",
00:10:32.594 "uuid": "c17d0f88-cc33-4745-bef3-8bbb21835d6c",
00:10:32.594 "is_configured": true,
00:10:32.594 "data_offset": 0,
00:10:32.594 "data_size": 65536
00:10:32.594 }
00:10:32.594 ]
00:10:32.594 }'
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:32.594 08:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.161 08:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:33.161 08:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:33.161 08:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:33.161 08:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.161 08:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:33.161 08:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:10:33.161 08:44:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:33.161 08:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:33.161 08:44:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.161 [2024-11-20 08:44:03.982613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:33.161 08:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:33.161 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:33.161 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:33.161 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:33.161 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:33.161 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:33.161 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:33.161 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:33.161 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:33.161 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:33.161 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:33.161 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:33.161 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:33.161 08:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:33.161 08:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.421 08:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:33.421 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:33.421 "name": "Existed_Raid",
00:10:33.421 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:33.421 "strip_size_kb": 64,
00:10:33.421 "state": "configuring",
00:10:33.421 "raid_level": "raid0",
00:10:33.421 "superblock": false,
00:10:33.421 "num_base_bdevs": 3,
00:10:33.421 "num_base_bdevs_discovered": 1,
00:10:33.421 "num_base_bdevs_operational": 3,
00:10:33.421 "base_bdevs_list": [
00:10:33.421 {
00:10:33.421 "name": null,
00:10:33.421 "uuid": "b86dd0ad-f82d-4ff2-b826-a2107168c5e3",
00:10:33.421 "is_configured": false,
00:10:33.421 "data_offset": 0,
00:10:33.421 "data_size": 65536
00:10:33.421 },
00:10:33.421 {
00:10:33.421 "name": null,
00:10:33.421 "uuid": "e008597e-9c37-46a5-81d5-540a26e64fe3",
00:10:33.421 "is_configured": false,
00:10:33.421 "data_offset": 0,
00:10:33.421 "data_size": 65536
00:10:33.421 },
00:10:33.421 {
00:10:33.421 "name": "BaseBdev3",
00:10:33.421 "uuid": "c17d0f88-cc33-4745-bef3-8bbb21835d6c",
00:10:33.421 "is_configured": true,
00:10:33.421 "data_offset": 0,
00:10:33.421 "data_size": 65536
00:10:33.421 }
00:10:33.421 ]
00:10:33.421 }'
00:10:33.421 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:33.421 08:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 --
# [[ 0 == 0 ]] 00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.012 [2024-11-20 08:44:04.680979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.012 "name": "Existed_Raid", 00:10:34.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.012 "strip_size_kb": 64, 00:10:34.012 "state": "configuring", 00:10:34.012 "raid_level": "raid0", 00:10:34.012 "superblock": false, 00:10:34.012 "num_base_bdevs": 3, 00:10:34.012 "num_base_bdevs_discovered": 2, 00:10:34.012 "num_base_bdevs_operational": 3, 00:10:34.012 "base_bdevs_list": [ 00:10:34.012 { 00:10:34.012 "name": null, 00:10:34.012 "uuid": "b86dd0ad-f82d-4ff2-b826-a2107168c5e3", 00:10:34.012 "is_configured": false, 00:10:34.012 "data_offset": 0, 00:10:34.012 "data_size": 65536 00:10:34.012 }, 00:10:34.012 { 00:10:34.012 "name": "BaseBdev2", 00:10:34.012 "uuid": "e008597e-9c37-46a5-81d5-540a26e64fe3", 00:10:34.012 "is_configured": true, 00:10:34.012 "data_offset": 0, 00:10:34.012 "data_size": 65536 00:10:34.012 }, 00:10:34.012 { 00:10:34.012 "name": "BaseBdev3", 00:10:34.012 "uuid": "c17d0f88-cc33-4745-bef3-8bbb21835d6c", 00:10:34.012 "is_configured": true, 00:10:34.012 "data_offset": 0, 00:10:34.012 "data_size": 65536 00:10:34.012 } 00:10:34.012 ] 00:10:34.012 }' 00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.012 08:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.579 08:44:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b86dd0ad-f82d-4ff2-b826-a2107168c5e3 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.579 [2024-11-20 08:44:05.342456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:34.579 [2024-11-20 08:44:05.342502] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:34.579 [2024-11-20 08:44:05.342517] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:34.579 [2024-11-20 08:44:05.342809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:10:34.579 [2024-11-20 08:44:05.343011] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:34.579 [2024-11-20 08:44:05.343026] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:34.579 [2024-11-20 08:44:05.343382] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.579 NewBaseBdev 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.579 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:34.579 [ 00:10:34.579 { 00:10:34.579 "name": "NewBaseBdev", 00:10:34.579 "aliases": [ 00:10:34.579 "b86dd0ad-f82d-4ff2-b826-a2107168c5e3" 00:10:34.579 ], 00:10:34.579 "product_name": "Malloc disk", 00:10:34.579 "block_size": 512, 00:10:34.579 "num_blocks": 65536, 00:10:34.579 "uuid": "b86dd0ad-f82d-4ff2-b826-a2107168c5e3", 00:10:34.579 "assigned_rate_limits": { 00:10:34.579 "rw_ios_per_sec": 0, 00:10:34.579 "rw_mbytes_per_sec": 0, 00:10:34.579 "r_mbytes_per_sec": 0, 00:10:34.579 "w_mbytes_per_sec": 0 00:10:34.579 }, 00:10:34.579 "claimed": true, 00:10:34.579 "claim_type": "exclusive_write", 00:10:34.579 "zoned": false, 00:10:34.579 "supported_io_types": { 00:10:34.579 "read": true, 00:10:34.579 "write": true, 00:10:34.579 "unmap": true, 00:10:34.579 "flush": true, 00:10:34.579 "reset": true, 00:10:34.579 "nvme_admin": false, 00:10:34.579 "nvme_io": false, 00:10:34.579 "nvme_io_md": false, 00:10:34.579 "write_zeroes": true, 00:10:34.579 "zcopy": true, 00:10:34.579 "get_zone_info": false, 00:10:34.579 "zone_management": false, 00:10:34.579 "zone_append": false, 00:10:34.579 "compare": false, 00:10:34.579 "compare_and_write": false, 00:10:34.579 "abort": true, 00:10:34.579 "seek_hole": false, 00:10:34.579 "seek_data": false, 00:10:34.580 "copy": true, 00:10:34.580 "nvme_iov_md": false 00:10:34.580 }, 00:10:34.580 "memory_domains": [ 00:10:34.580 { 00:10:34.580 "dma_device_id": "system", 00:10:34.580 "dma_device_type": 1 00:10:34.580 }, 00:10:34.580 { 00:10:34.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.580 "dma_device_type": 2 00:10:34.580 } 00:10:34.580 ], 00:10:34.580 "driver_specific": {} 00:10:34.580 } 00:10:34.580 ] 00:10:34.580 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.580 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:34.580 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:10:34.580 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.580 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.580 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.580 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.580 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.580 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.580 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.580 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.580 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.580 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.580 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.580 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.580 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.580 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.580 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.580 "name": "Existed_Raid", 00:10:34.580 "uuid": "6064b07a-631b-4f63-aa16-9456268b6084", 00:10:34.580 "strip_size_kb": 64, 00:10:34.580 "state": "online", 00:10:34.580 "raid_level": "raid0", 00:10:34.580 "superblock": false, 00:10:34.580 "num_base_bdevs": 3, 00:10:34.580 
"num_base_bdevs_discovered": 3, 00:10:34.580 "num_base_bdevs_operational": 3, 00:10:34.580 "base_bdevs_list": [ 00:10:34.580 { 00:10:34.580 "name": "NewBaseBdev", 00:10:34.580 "uuid": "b86dd0ad-f82d-4ff2-b826-a2107168c5e3", 00:10:34.580 "is_configured": true, 00:10:34.580 "data_offset": 0, 00:10:34.580 "data_size": 65536 00:10:34.580 }, 00:10:34.580 { 00:10:34.580 "name": "BaseBdev2", 00:10:34.580 "uuid": "e008597e-9c37-46a5-81d5-540a26e64fe3", 00:10:34.580 "is_configured": true, 00:10:34.580 "data_offset": 0, 00:10:34.580 "data_size": 65536 00:10:34.580 }, 00:10:34.580 { 00:10:34.580 "name": "BaseBdev3", 00:10:34.580 "uuid": "c17d0f88-cc33-4745-bef3-8bbb21835d6c", 00:10:34.580 "is_configured": true, 00:10:34.580 "data_offset": 0, 00:10:34.580 "data_size": 65536 00:10:34.580 } 00:10:34.580 ] 00:10:34.580 }' 00:10:34.580 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.580 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.148 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:35.148 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:35.148 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:35.148 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:35.148 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:35.148 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:35.148 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:35.148 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.148 08:44:05 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:35.148 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.148 [2024-11-20 08:44:05.903054] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:35.148 08:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.148 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:35.148 "name": "Existed_Raid", 00:10:35.148 "aliases": [ 00:10:35.148 "6064b07a-631b-4f63-aa16-9456268b6084" 00:10:35.148 ], 00:10:35.148 "product_name": "Raid Volume", 00:10:35.148 "block_size": 512, 00:10:35.148 "num_blocks": 196608, 00:10:35.148 "uuid": "6064b07a-631b-4f63-aa16-9456268b6084", 00:10:35.148 "assigned_rate_limits": { 00:10:35.148 "rw_ios_per_sec": 0, 00:10:35.148 "rw_mbytes_per_sec": 0, 00:10:35.148 "r_mbytes_per_sec": 0, 00:10:35.148 "w_mbytes_per_sec": 0 00:10:35.148 }, 00:10:35.148 "claimed": false, 00:10:35.148 "zoned": false, 00:10:35.148 "supported_io_types": { 00:10:35.148 "read": true, 00:10:35.148 "write": true, 00:10:35.148 "unmap": true, 00:10:35.148 "flush": true, 00:10:35.148 "reset": true, 00:10:35.148 "nvme_admin": false, 00:10:35.148 "nvme_io": false, 00:10:35.148 "nvme_io_md": false, 00:10:35.148 "write_zeroes": true, 00:10:35.148 "zcopy": false, 00:10:35.148 "get_zone_info": false, 00:10:35.148 "zone_management": false, 00:10:35.148 "zone_append": false, 00:10:35.148 "compare": false, 00:10:35.148 "compare_and_write": false, 00:10:35.148 "abort": false, 00:10:35.148 "seek_hole": false, 00:10:35.148 "seek_data": false, 00:10:35.148 "copy": false, 00:10:35.148 "nvme_iov_md": false 00:10:35.148 }, 00:10:35.148 "memory_domains": [ 00:10:35.148 { 00:10:35.148 "dma_device_id": "system", 00:10:35.148 "dma_device_type": 1 00:10:35.148 }, 00:10:35.148 { 00:10:35.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.148 "dma_device_type": 2 00:10:35.148 }, 00:10:35.148 
{ 00:10:35.148 "dma_device_id": "system", 00:10:35.148 "dma_device_type": 1 00:10:35.148 }, 00:10:35.148 { 00:10:35.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.148 "dma_device_type": 2 00:10:35.148 }, 00:10:35.148 { 00:10:35.148 "dma_device_id": "system", 00:10:35.148 "dma_device_type": 1 00:10:35.148 }, 00:10:35.148 { 00:10:35.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.148 "dma_device_type": 2 00:10:35.148 } 00:10:35.148 ], 00:10:35.148 "driver_specific": { 00:10:35.148 "raid": { 00:10:35.148 "uuid": "6064b07a-631b-4f63-aa16-9456268b6084", 00:10:35.148 "strip_size_kb": 64, 00:10:35.148 "state": "online", 00:10:35.148 "raid_level": "raid0", 00:10:35.148 "superblock": false, 00:10:35.148 "num_base_bdevs": 3, 00:10:35.148 "num_base_bdevs_discovered": 3, 00:10:35.148 "num_base_bdevs_operational": 3, 00:10:35.148 "base_bdevs_list": [ 00:10:35.148 { 00:10:35.148 "name": "NewBaseBdev", 00:10:35.148 "uuid": "b86dd0ad-f82d-4ff2-b826-a2107168c5e3", 00:10:35.148 "is_configured": true, 00:10:35.148 "data_offset": 0, 00:10:35.148 "data_size": 65536 00:10:35.148 }, 00:10:35.148 { 00:10:35.148 "name": "BaseBdev2", 00:10:35.148 "uuid": "e008597e-9c37-46a5-81d5-540a26e64fe3", 00:10:35.148 "is_configured": true, 00:10:35.148 "data_offset": 0, 00:10:35.148 "data_size": 65536 00:10:35.148 }, 00:10:35.148 { 00:10:35.148 "name": "BaseBdev3", 00:10:35.148 "uuid": "c17d0f88-cc33-4745-bef3-8bbb21835d6c", 00:10:35.148 "is_configured": true, 00:10:35.148 "data_offset": 0, 00:10:35.148 "data_size": 65536 00:10:35.148 } 00:10:35.148 ] 00:10:35.148 } 00:10:35.148 } 00:10:35.148 }' 00:10:35.148 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:35.148 08:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:35.148 BaseBdev2 00:10:35.148 BaseBdev3' 00:10:35.148 08:44:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.148 08:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:35.148 08:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.148 08:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:35.148 08:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.148 08:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.148 08:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.408 
08:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.408 [2024-11-20 08:44:06.210774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:35.408 [2024-11-20 08:44:06.210805] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.408 [2024-11-20 08:44:06.210915] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.408 [2024-11-20 08:44:06.210984] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.408 [2024-11-20 08:44:06.211002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63800 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63800 ']' 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63800 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63800 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63800' 00:10:35.408 killing process with pid 63800 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63800 00:10:35.408 [2024-11-20 08:44:06.250323] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:35.408 08:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63800 00:10:35.667 [2024-11-20 08:44:06.514874] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:37.042 08:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:37.042 00:10:37.042 real 0m11.724s 00:10:37.042 user 0m19.529s 00:10:37.042 sys 0m1.542s 00:10:37.042 08:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.042 
08:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.042 ************************************ 00:10:37.042 END TEST raid_state_function_test 00:10:37.042 ************************************ 00:10:37.042 08:44:07 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:10:37.042 08:44:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:37.042 08:44:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.042 08:44:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:37.042 ************************************ 00:10:37.042 START TEST raid_state_function_test_sb 00:10:37.042 ************************************ 00:10:37.042 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:10:37.042 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:37.042 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:37.042 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:37.042 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:37.042 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:37.042 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.042 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:37.042 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:37.042 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 
00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64434 00:10:37.043 Process raid pid: 64434 00:10:37.043 08:44:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64434' 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64434 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64434 ']' 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.043 08:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.043 [2024-11-20 08:44:07.715335] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:10:37.043 [2024-11-20 08:44:07.715507] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.043 [2024-11-20 08:44:07.907723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.309 [2024-11-20 08:44:08.062952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.614 [2024-11-20 08:44:08.269730] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.614 [2024-11-20 08:44:08.269781] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.873 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.873 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:37.873 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:37.873 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.873 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.873 [2024-11-20 08:44:08.726256] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:37.873 [2024-11-20 08:44:08.726327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:37.873 [2024-11-20 08:44:08.726345] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:37.873 [2024-11-20 08:44:08.726362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:37.873 [2024-11-20 08:44:08.726372] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:10:37.873 [2024-11-20 08:44:08.726387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:37.873 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.873 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:37.873 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.873 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.873 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.873 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.873 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.873 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.873 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.873 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.873 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.873 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.873 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.873 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.873 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.873 08:44:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.873 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.873 "name": "Existed_Raid", 00:10:37.873 "uuid": "855ea1f6-c533-4433-8dd0-263225bfda8e", 00:10:37.873 "strip_size_kb": 64, 00:10:37.873 "state": "configuring", 00:10:37.873 "raid_level": "raid0", 00:10:37.873 "superblock": true, 00:10:37.873 "num_base_bdevs": 3, 00:10:37.873 "num_base_bdevs_discovered": 0, 00:10:37.873 "num_base_bdevs_operational": 3, 00:10:37.873 "base_bdevs_list": [ 00:10:37.873 { 00:10:37.873 "name": "BaseBdev1", 00:10:37.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.873 "is_configured": false, 00:10:37.873 "data_offset": 0, 00:10:37.873 "data_size": 0 00:10:37.873 }, 00:10:37.873 { 00:10:37.873 "name": "BaseBdev2", 00:10:37.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.873 "is_configured": false, 00:10:37.873 "data_offset": 0, 00:10:37.873 "data_size": 0 00:10:37.873 }, 00:10:37.873 { 00:10:37.873 "name": "BaseBdev3", 00:10:37.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.873 "is_configured": false, 00:10:37.873 "data_offset": 0, 00:10:37.873 "data_size": 0 00:10:37.874 } 00:10:37.874 ] 00:10:37.874 }' 00:10:37.874 08:44:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.874 08:44:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.440 [2024-11-20 08:44:09.242312] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:38.440 [2024-11-20 08:44:09.242366] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.440 [2024-11-20 08:44:09.254323] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:38.440 [2024-11-20 08:44:09.254518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:38.440 [2024-11-20 08:44:09.254653] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.440 [2024-11-20 08:44:09.254832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:38.440 [2024-11-20 08:44:09.254949] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:38.440 [2024-11-20 08:44:09.255097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.440 [2024-11-20 08:44:09.303079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.440 BaseBdev1 
00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.440 [ 00:10:38.440 { 00:10:38.440 "name": "BaseBdev1", 00:10:38.440 "aliases": [ 00:10:38.440 "e6057a89-bc0a-4946-bb45-4c3d4e5bd396" 00:10:38.440 ], 00:10:38.440 "product_name": "Malloc disk", 00:10:38.440 "block_size": 512, 00:10:38.440 "num_blocks": 65536, 00:10:38.440 "uuid": "e6057a89-bc0a-4946-bb45-4c3d4e5bd396", 00:10:38.440 "assigned_rate_limits": { 00:10:38.440 
"rw_ios_per_sec": 0, 00:10:38.440 "rw_mbytes_per_sec": 0, 00:10:38.440 "r_mbytes_per_sec": 0, 00:10:38.440 "w_mbytes_per_sec": 0 00:10:38.440 }, 00:10:38.440 "claimed": true, 00:10:38.440 "claim_type": "exclusive_write", 00:10:38.440 "zoned": false, 00:10:38.440 "supported_io_types": { 00:10:38.440 "read": true, 00:10:38.440 "write": true, 00:10:38.440 "unmap": true, 00:10:38.440 "flush": true, 00:10:38.440 "reset": true, 00:10:38.440 "nvme_admin": false, 00:10:38.440 "nvme_io": false, 00:10:38.440 "nvme_io_md": false, 00:10:38.440 "write_zeroes": true, 00:10:38.440 "zcopy": true, 00:10:38.440 "get_zone_info": false, 00:10:38.440 "zone_management": false, 00:10:38.440 "zone_append": false, 00:10:38.440 "compare": false, 00:10:38.440 "compare_and_write": false, 00:10:38.440 "abort": true, 00:10:38.440 "seek_hole": false, 00:10:38.440 "seek_data": false, 00:10:38.440 "copy": true, 00:10:38.440 "nvme_iov_md": false 00:10:38.440 }, 00:10:38.440 "memory_domains": [ 00:10:38.440 { 00:10:38.440 "dma_device_id": "system", 00:10:38.440 "dma_device_type": 1 00:10:38.440 }, 00:10:38.440 { 00:10:38.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.440 "dma_device_type": 2 00:10:38.440 } 00:10:38.440 ], 00:10:38.440 "driver_specific": {} 00:10:38.440 } 00:10:38.440 ] 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.440 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.698 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.698 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.698 "name": "Existed_Raid", 00:10:38.698 "uuid": "301b6ccd-b067-4b0f-983d-a289aff82504", 00:10:38.698 "strip_size_kb": 64, 00:10:38.698 "state": "configuring", 00:10:38.698 "raid_level": "raid0", 00:10:38.698 "superblock": true, 00:10:38.698 "num_base_bdevs": 3, 00:10:38.698 "num_base_bdevs_discovered": 1, 00:10:38.698 "num_base_bdevs_operational": 3, 00:10:38.698 "base_bdevs_list": [ 00:10:38.698 { 00:10:38.698 "name": "BaseBdev1", 00:10:38.698 "uuid": "e6057a89-bc0a-4946-bb45-4c3d4e5bd396", 00:10:38.698 "is_configured": true, 00:10:38.698 "data_offset": 2048, 00:10:38.698 "data_size": 63488 
00:10:38.698 }, 00:10:38.698 { 00:10:38.698 "name": "BaseBdev2", 00:10:38.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.698 "is_configured": false, 00:10:38.698 "data_offset": 0, 00:10:38.698 "data_size": 0 00:10:38.698 }, 00:10:38.698 { 00:10:38.698 "name": "BaseBdev3", 00:10:38.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.698 "is_configured": false, 00:10:38.698 "data_offset": 0, 00:10:38.698 "data_size": 0 00:10:38.698 } 00:10:38.698 ] 00:10:38.698 }' 00:10:38.698 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.698 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.957 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:38.957 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.957 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.957 [2024-11-20 08:44:09.839254] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:38.957 [2024-11-20 08:44:09.839324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:38.957 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.957 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:38.957 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.957 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.957 [2024-11-20 08:44:09.851314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.957 [2024-11-20 
08:44:09.853885] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.957 [2024-11-20 08:44:09.853941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:38.957 [2024-11-20 08:44:09.853958] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:38.957 [2024-11-20 08:44:09.853974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:38.957 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.957 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:38.957 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.957 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:38.957 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.957 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.957 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.957 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.957 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.957 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.957 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.957 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.957 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:38.957 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.957 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.957 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.957 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.216 08:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.216 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.216 "name": "Existed_Raid", 00:10:39.216 "uuid": "11c56a11-90c6-459e-9c50-743d82d833a4", 00:10:39.216 "strip_size_kb": 64, 00:10:39.216 "state": "configuring", 00:10:39.216 "raid_level": "raid0", 00:10:39.216 "superblock": true, 00:10:39.216 "num_base_bdevs": 3, 00:10:39.216 "num_base_bdevs_discovered": 1, 00:10:39.216 "num_base_bdevs_operational": 3, 00:10:39.216 "base_bdevs_list": [ 00:10:39.216 { 00:10:39.216 "name": "BaseBdev1", 00:10:39.216 "uuid": "e6057a89-bc0a-4946-bb45-4c3d4e5bd396", 00:10:39.216 "is_configured": true, 00:10:39.216 "data_offset": 2048, 00:10:39.216 "data_size": 63488 00:10:39.216 }, 00:10:39.216 { 00:10:39.216 "name": "BaseBdev2", 00:10:39.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.216 "is_configured": false, 00:10:39.216 "data_offset": 0, 00:10:39.216 "data_size": 0 00:10:39.216 }, 00:10:39.216 { 00:10:39.216 "name": "BaseBdev3", 00:10:39.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.216 "is_configured": false, 00:10:39.216 "data_offset": 0, 00:10:39.216 "data_size": 0 00:10:39.216 } 00:10:39.216 ] 00:10:39.216 }' 00:10:39.216 08:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.216 08:44:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:39.475 08:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:39.475 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.475 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.734 [2024-11-20 08:44:10.393791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.734 BaseBdev2 00:10:39.734 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.734 08:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:39.734 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:39.734 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.734 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:39.734 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.734 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.734 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.734 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.734 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.734 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.734 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:39.734 08:44:10 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.734 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.734 [ 00:10:39.734 { 00:10:39.734 "name": "BaseBdev2", 00:10:39.734 "aliases": [ 00:10:39.734 "ef5b6fe6-69af-46d4-a76e-5e293d95a570" 00:10:39.734 ], 00:10:39.734 "product_name": "Malloc disk", 00:10:39.734 "block_size": 512, 00:10:39.734 "num_blocks": 65536, 00:10:39.734 "uuid": "ef5b6fe6-69af-46d4-a76e-5e293d95a570", 00:10:39.734 "assigned_rate_limits": { 00:10:39.734 "rw_ios_per_sec": 0, 00:10:39.734 "rw_mbytes_per_sec": 0, 00:10:39.734 "r_mbytes_per_sec": 0, 00:10:39.734 "w_mbytes_per_sec": 0 00:10:39.734 }, 00:10:39.734 "claimed": true, 00:10:39.734 "claim_type": "exclusive_write", 00:10:39.735 "zoned": false, 00:10:39.735 "supported_io_types": { 00:10:39.735 "read": true, 00:10:39.735 "write": true, 00:10:39.735 "unmap": true, 00:10:39.735 "flush": true, 00:10:39.735 "reset": true, 00:10:39.735 "nvme_admin": false, 00:10:39.735 "nvme_io": false, 00:10:39.735 "nvme_io_md": false, 00:10:39.735 "write_zeroes": true, 00:10:39.735 "zcopy": true, 00:10:39.735 "get_zone_info": false, 00:10:39.735 "zone_management": false, 00:10:39.735 "zone_append": false, 00:10:39.735 "compare": false, 00:10:39.735 "compare_and_write": false, 00:10:39.735 "abort": true, 00:10:39.735 "seek_hole": false, 00:10:39.735 "seek_data": false, 00:10:39.735 "copy": true, 00:10:39.735 "nvme_iov_md": false 00:10:39.735 }, 00:10:39.735 "memory_domains": [ 00:10:39.735 { 00:10:39.735 "dma_device_id": "system", 00:10:39.735 "dma_device_type": 1 00:10:39.735 }, 00:10:39.735 { 00:10:39.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.735 "dma_device_type": 2 00:10:39.735 } 00:10:39.735 ], 00:10:39.735 "driver_specific": {} 00:10:39.735 } 00:10:39.735 ] 00:10:39.735 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.735 08:44:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:10:39.735 08:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:39.735 08:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.735 08:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:39.735 08:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.735 08:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.735 08:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.735 08:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.735 08:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.735 08:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.735 08:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.735 08:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.735 08:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.735 08:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.735 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.735 08:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.735 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.735 08:44:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.735 08:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.735 "name": "Existed_Raid", 00:10:39.735 "uuid": "11c56a11-90c6-459e-9c50-743d82d833a4", 00:10:39.735 "strip_size_kb": 64, 00:10:39.735 "state": "configuring", 00:10:39.735 "raid_level": "raid0", 00:10:39.735 "superblock": true, 00:10:39.735 "num_base_bdevs": 3, 00:10:39.735 "num_base_bdevs_discovered": 2, 00:10:39.735 "num_base_bdevs_operational": 3, 00:10:39.735 "base_bdevs_list": [ 00:10:39.735 { 00:10:39.735 "name": "BaseBdev1", 00:10:39.735 "uuid": "e6057a89-bc0a-4946-bb45-4c3d4e5bd396", 00:10:39.735 "is_configured": true, 00:10:39.735 "data_offset": 2048, 00:10:39.735 "data_size": 63488 00:10:39.735 }, 00:10:39.735 { 00:10:39.735 "name": "BaseBdev2", 00:10:39.735 "uuid": "ef5b6fe6-69af-46d4-a76e-5e293d95a570", 00:10:39.735 "is_configured": true, 00:10:39.735 "data_offset": 2048, 00:10:39.735 "data_size": 63488 00:10:39.735 }, 00:10:39.735 { 00:10:39.735 "name": "BaseBdev3", 00:10:39.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.735 "is_configured": false, 00:10:39.735 "data_offset": 0, 00:10:39.735 "data_size": 0 00:10:39.735 } 00:10:39.735 ] 00:10:39.735 }' 00:10:39.735 08:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.735 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.303 08:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:40.303 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.303 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.303 [2024-11-20 08:44:10.985524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:40.303 [2024-11-20 08:44:10.986012] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:40.304 [2024-11-20 08:44:10.986051] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:40.304 BaseBdev3 00:10:40.304 [2024-11-20 08:44:10.986415] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:40.304 [2024-11-20 08:44:10.986621] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:40.304 [2024-11-20 08:44:10.986645] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:40.304 [2024-11-20 08:44:10.986831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.304 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.304 08:44:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:40.304 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:40.304 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.304 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:40.304 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.304 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.304 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.304 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.304 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.304 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:40.304 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:40.304 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.304 08:44:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.304 [ 00:10:40.304 { 00:10:40.304 "name": "BaseBdev3", 00:10:40.304 "aliases": [ 00:10:40.304 "e2b3b32f-0911-47c1-adee-dd57de2bb0a3" 00:10:40.304 ], 00:10:40.304 "product_name": "Malloc disk", 00:10:40.304 "block_size": 512, 00:10:40.304 "num_blocks": 65536, 00:10:40.304 "uuid": "e2b3b32f-0911-47c1-adee-dd57de2bb0a3", 00:10:40.304 "assigned_rate_limits": { 00:10:40.304 "rw_ios_per_sec": 0, 00:10:40.304 "rw_mbytes_per_sec": 0, 00:10:40.304 "r_mbytes_per_sec": 0, 00:10:40.304 "w_mbytes_per_sec": 0 00:10:40.304 }, 00:10:40.304 "claimed": true, 00:10:40.304 "claim_type": "exclusive_write", 00:10:40.304 "zoned": false, 00:10:40.304 "supported_io_types": { 00:10:40.304 "read": true, 00:10:40.304 "write": true, 00:10:40.304 "unmap": true, 00:10:40.304 "flush": true, 00:10:40.304 "reset": true, 00:10:40.304 "nvme_admin": false, 00:10:40.304 "nvme_io": false, 00:10:40.304 "nvme_io_md": false, 00:10:40.304 "write_zeroes": true, 00:10:40.304 "zcopy": true, 00:10:40.304 "get_zone_info": false, 00:10:40.304 "zone_management": false, 00:10:40.304 "zone_append": false, 00:10:40.304 "compare": false, 00:10:40.304 "compare_and_write": false, 00:10:40.304 "abort": true, 00:10:40.304 "seek_hole": false, 00:10:40.304 "seek_data": false, 00:10:40.304 "copy": true, 00:10:40.304 "nvme_iov_md": false 00:10:40.304 }, 00:10:40.304 "memory_domains": [ 00:10:40.304 { 00:10:40.304 "dma_device_id": "system", 00:10:40.304 "dma_device_type": 1 00:10:40.304 }, 00:10:40.304 { 00:10:40.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.304 "dma_device_type": 2 00:10:40.304 } 00:10:40.304 ], 00:10:40.304 "driver_specific": 
{} 00:10:40.304 } 00:10:40.304 ] 00:10:40.304 08:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.304 08:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:40.304 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:40.304 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:40.304 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:40.304 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.304 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.304 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.304 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.304 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.304 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.304 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.304 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.304 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.304 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.304 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.304 08:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:40.304 08:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.304 08:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.304 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.304 "name": "Existed_Raid", 00:10:40.304 "uuid": "11c56a11-90c6-459e-9c50-743d82d833a4", 00:10:40.304 "strip_size_kb": 64, 00:10:40.304 "state": "online", 00:10:40.304 "raid_level": "raid0", 00:10:40.304 "superblock": true, 00:10:40.304 "num_base_bdevs": 3, 00:10:40.304 "num_base_bdevs_discovered": 3, 00:10:40.304 "num_base_bdevs_operational": 3, 00:10:40.304 "base_bdevs_list": [ 00:10:40.304 { 00:10:40.304 "name": "BaseBdev1", 00:10:40.304 "uuid": "e6057a89-bc0a-4946-bb45-4c3d4e5bd396", 00:10:40.304 "is_configured": true, 00:10:40.304 "data_offset": 2048, 00:10:40.304 "data_size": 63488 00:10:40.304 }, 00:10:40.304 { 00:10:40.304 "name": "BaseBdev2", 00:10:40.304 "uuid": "ef5b6fe6-69af-46d4-a76e-5e293d95a570", 00:10:40.304 "is_configured": true, 00:10:40.304 "data_offset": 2048, 00:10:40.304 "data_size": 63488 00:10:40.304 }, 00:10:40.304 { 00:10:40.304 "name": "BaseBdev3", 00:10:40.304 "uuid": "e2b3b32f-0911-47c1-adee-dd57de2bb0a3", 00:10:40.304 "is_configured": true, 00:10:40.304 "data_offset": 2048, 00:10:40.304 "data_size": 63488 00:10:40.304 } 00:10:40.304 ] 00:10:40.304 }' 00:10:40.304 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.304 08:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.885 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:40.885 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:40.885 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:10:40.885 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:40.885 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:40.885 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:40.885 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:40.885 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:40.885 08:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.885 08:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.885 [2024-11-20 08:44:11.530132] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.885 08:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.885 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:40.885 "name": "Existed_Raid", 00:10:40.885 "aliases": [ 00:10:40.885 "11c56a11-90c6-459e-9c50-743d82d833a4" 00:10:40.885 ], 00:10:40.885 "product_name": "Raid Volume", 00:10:40.885 "block_size": 512, 00:10:40.885 "num_blocks": 190464, 00:10:40.885 "uuid": "11c56a11-90c6-459e-9c50-743d82d833a4", 00:10:40.885 "assigned_rate_limits": { 00:10:40.885 "rw_ios_per_sec": 0, 00:10:40.885 "rw_mbytes_per_sec": 0, 00:10:40.885 "r_mbytes_per_sec": 0, 00:10:40.885 "w_mbytes_per_sec": 0 00:10:40.885 }, 00:10:40.885 "claimed": false, 00:10:40.885 "zoned": false, 00:10:40.885 "supported_io_types": { 00:10:40.885 "read": true, 00:10:40.885 "write": true, 00:10:40.885 "unmap": true, 00:10:40.885 "flush": true, 00:10:40.885 "reset": true, 00:10:40.885 "nvme_admin": false, 00:10:40.885 "nvme_io": false, 00:10:40.885 "nvme_io_md": false, 00:10:40.885 
"write_zeroes": true, 00:10:40.885 "zcopy": false, 00:10:40.885 "get_zone_info": false, 00:10:40.885 "zone_management": false, 00:10:40.885 "zone_append": false, 00:10:40.885 "compare": false, 00:10:40.885 "compare_and_write": false, 00:10:40.885 "abort": false, 00:10:40.885 "seek_hole": false, 00:10:40.885 "seek_data": false, 00:10:40.885 "copy": false, 00:10:40.885 "nvme_iov_md": false 00:10:40.885 }, 00:10:40.885 "memory_domains": [ 00:10:40.885 { 00:10:40.885 "dma_device_id": "system", 00:10:40.885 "dma_device_type": 1 00:10:40.885 }, 00:10:40.885 { 00:10:40.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.885 "dma_device_type": 2 00:10:40.885 }, 00:10:40.885 { 00:10:40.885 "dma_device_id": "system", 00:10:40.886 "dma_device_type": 1 00:10:40.886 }, 00:10:40.886 { 00:10:40.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.886 "dma_device_type": 2 00:10:40.886 }, 00:10:40.886 { 00:10:40.886 "dma_device_id": "system", 00:10:40.886 "dma_device_type": 1 00:10:40.886 }, 00:10:40.886 { 00:10:40.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.886 "dma_device_type": 2 00:10:40.886 } 00:10:40.886 ], 00:10:40.886 "driver_specific": { 00:10:40.886 "raid": { 00:10:40.886 "uuid": "11c56a11-90c6-459e-9c50-743d82d833a4", 00:10:40.886 "strip_size_kb": 64, 00:10:40.886 "state": "online", 00:10:40.886 "raid_level": "raid0", 00:10:40.886 "superblock": true, 00:10:40.886 "num_base_bdevs": 3, 00:10:40.886 "num_base_bdevs_discovered": 3, 00:10:40.886 "num_base_bdevs_operational": 3, 00:10:40.886 "base_bdevs_list": [ 00:10:40.886 { 00:10:40.886 "name": "BaseBdev1", 00:10:40.886 "uuid": "e6057a89-bc0a-4946-bb45-4c3d4e5bd396", 00:10:40.886 "is_configured": true, 00:10:40.886 "data_offset": 2048, 00:10:40.886 "data_size": 63488 00:10:40.886 }, 00:10:40.886 { 00:10:40.886 "name": "BaseBdev2", 00:10:40.886 "uuid": "ef5b6fe6-69af-46d4-a76e-5e293d95a570", 00:10:40.886 "is_configured": true, 00:10:40.886 "data_offset": 2048, 00:10:40.886 "data_size": 63488 00:10:40.886 }, 
00:10:40.886 { 00:10:40.886 "name": "BaseBdev3", 00:10:40.886 "uuid": "e2b3b32f-0911-47c1-adee-dd57de2bb0a3", 00:10:40.886 "is_configured": true, 00:10:40.886 "data_offset": 2048, 00:10:40.886 "data_size": 63488 00:10:40.886 } 00:10:40.886 ] 00:10:40.886 } 00:10:40.886 } 00:10:40.886 }' 00:10:40.886 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:40.886 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:40.886 BaseBdev2 00:10:40.886 BaseBdev3' 00:10:40.886 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.886 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:40.886 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.886 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:40.886 08:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.886 08:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.886 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.886 08:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.886 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.886 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.886 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.886 
08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:40.886 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.886 08:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.886 08:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.886 08:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.145 [2024-11-20 08:44:11.853862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:41.145 [2024-11-20 08:44:11.853896] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.145 [2024-11-20 08:44:11.853964] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.145 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:41.146 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:41.146 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.146 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.146 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.146 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.146 08:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.146 08:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.146 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.146 08:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.146 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.146 "name": "Existed_Raid", 00:10:41.146 "uuid": "11c56a11-90c6-459e-9c50-743d82d833a4", 00:10:41.146 "strip_size_kb": 64, 00:10:41.146 "state": "offline", 00:10:41.146 "raid_level": "raid0", 00:10:41.146 "superblock": true, 00:10:41.146 "num_base_bdevs": 3, 00:10:41.146 "num_base_bdevs_discovered": 2, 00:10:41.146 "num_base_bdevs_operational": 2, 00:10:41.146 "base_bdevs_list": [ 00:10:41.146 { 00:10:41.146 "name": null, 00:10:41.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.146 "is_configured": false, 00:10:41.146 "data_offset": 0, 00:10:41.146 "data_size": 63488 00:10:41.146 }, 00:10:41.146 { 00:10:41.146 "name": "BaseBdev2", 00:10:41.146 "uuid": "ef5b6fe6-69af-46d4-a76e-5e293d95a570", 00:10:41.146 "is_configured": true, 00:10:41.146 "data_offset": 2048, 00:10:41.146 "data_size": 63488 00:10:41.146 }, 00:10:41.146 { 00:10:41.146 "name": "BaseBdev3", 00:10:41.146 "uuid": "e2b3b32f-0911-47c1-adee-dd57de2bb0a3", 
00:10:41.146 "is_configured": true, 00:10:41.146 "data_offset": 2048, 00:10:41.146 "data_size": 63488 00:10:41.146 } 00:10:41.146 ] 00:10:41.146 }' 00:10:41.146 08:44:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.146 08:44:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.714 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:41.714 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.714 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.714 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.714 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.714 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.714 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.714 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.714 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.714 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:41.714 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.714 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.714 [2024-11-20 08:44:12.519776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:41.714 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.714 08:44:12 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.714 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.714 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.714 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.714 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.714 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.714 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.973 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.973 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.973 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:41.973 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.973 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.973 [2024-11-20 08:44:12.662256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:41.973 [2024-11-20 08:44:12.662318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:41.973 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.973 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.973 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.973 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:41.973 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.973 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.973 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:41.973 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.973 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:41.973 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:41.973 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:41.973 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:41.973 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.973 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:41.973 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.973 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.973 BaseBdev2 00:10:41.973 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.973 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:41.974 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:41.974 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.974 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:41.974 08:44:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.974 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.974 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.974 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.974 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.974 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.974 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:41.974 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.974 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.233 [ 00:10:42.233 { 00:10:42.233 "name": "BaseBdev2", 00:10:42.233 "aliases": [ 00:10:42.233 "a5169b7c-73fc-4446-815a-e4f6965b8cb9" 00:10:42.233 ], 00:10:42.233 "product_name": "Malloc disk", 00:10:42.233 "block_size": 512, 00:10:42.233 "num_blocks": 65536, 00:10:42.233 "uuid": "a5169b7c-73fc-4446-815a-e4f6965b8cb9", 00:10:42.233 "assigned_rate_limits": { 00:10:42.233 "rw_ios_per_sec": 0, 00:10:42.233 "rw_mbytes_per_sec": 0, 00:10:42.233 "r_mbytes_per_sec": 0, 00:10:42.233 "w_mbytes_per_sec": 0 00:10:42.233 }, 00:10:42.233 "claimed": false, 00:10:42.233 "zoned": false, 00:10:42.233 "supported_io_types": { 00:10:42.233 "read": true, 00:10:42.233 "write": true, 00:10:42.233 "unmap": true, 00:10:42.233 "flush": true, 00:10:42.233 "reset": true, 00:10:42.233 "nvme_admin": false, 00:10:42.233 "nvme_io": false, 00:10:42.233 "nvme_io_md": false, 00:10:42.233 "write_zeroes": true, 00:10:42.233 "zcopy": true, 00:10:42.233 "get_zone_info": false, 00:10:42.233 
"zone_management": false, 00:10:42.233 "zone_append": false, 00:10:42.233 "compare": false, 00:10:42.233 "compare_and_write": false, 00:10:42.233 "abort": true, 00:10:42.233 "seek_hole": false, 00:10:42.233 "seek_data": false, 00:10:42.233 "copy": true, 00:10:42.233 "nvme_iov_md": false 00:10:42.233 }, 00:10:42.233 "memory_domains": [ 00:10:42.233 { 00:10:42.233 "dma_device_id": "system", 00:10:42.233 "dma_device_type": 1 00:10:42.233 }, 00:10:42.233 { 00:10:42.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.233 "dma_device_type": 2 00:10:42.233 } 00:10:42.233 ], 00:10:42.233 "driver_specific": {} 00:10:42.233 } 00:10:42.233 ] 00:10:42.233 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.233 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:42.233 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:42.233 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:42.233 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:42.233 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.233 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.233 BaseBdev3 00:10:42.233 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.233 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:42.233 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:42.233 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.234 [ 00:10:42.234 { 00:10:42.234 "name": "BaseBdev3", 00:10:42.234 "aliases": [ 00:10:42.234 "ced38980-0192-4804-8350-b1a2db80900b" 00:10:42.234 ], 00:10:42.234 "product_name": "Malloc disk", 00:10:42.234 "block_size": 512, 00:10:42.234 "num_blocks": 65536, 00:10:42.234 "uuid": "ced38980-0192-4804-8350-b1a2db80900b", 00:10:42.234 "assigned_rate_limits": { 00:10:42.234 "rw_ios_per_sec": 0, 00:10:42.234 "rw_mbytes_per_sec": 0, 00:10:42.234 "r_mbytes_per_sec": 0, 00:10:42.234 "w_mbytes_per_sec": 0 00:10:42.234 }, 00:10:42.234 "claimed": false, 00:10:42.234 "zoned": false, 00:10:42.234 "supported_io_types": { 00:10:42.234 "read": true, 00:10:42.234 "write": true, 00:10:42.234 "unmap": true, 00:10:42.234 "flush": true, 00:10:42.234 "reset": true, 00:10:42.234 "nvme_admin": false, 00:10:42.234 "nvme_io": false, 00:10:42.234 "nvme_io_md": false, 00:10:42.234 "write_zeroes": true, 00:10:42.234 
"zcopy": true, 00:10:42.234 "get_zone_info": false, 00:10:42.234 "zone_management": false, 00:10:42.234 "zone_append": false, 00:10:42.234 "compare": false, 00:10:42.234 "compare_and_write": false, 00:10:42.234 "abort": true, 00:10:42.234 "seek_hole": false, 00:10:42.234 "seek_data": false, 00:10:42.234 "copy": true, 00:10:42.234 "nvme_iov_md": false 00:10:42.234 }, 00:10:42.234 "memory_domains": [ 00:10:42.234 { 00:10:42.234 "dma_device_id": "system", 00:10:42.234 "dma_device_type": 1 00:10:42.234 }, 00:10:42.234 { 00:10:42.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.234 "dma_device_type": 2 00:10:42.234 } 00:10:42.234 ], 00:10:42.234 "driver_specific": {} 00:10:42.234 } 00:10:42.234 ] 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.234 [2024-11-20 08:44:12.991459] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:42.234 [2024-11-20 08:44:12.991665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:42.234 [2024-11-20 08:44:12.991809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.234 [2024-11-20 08:44:12.994364] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.234 08:44:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.234 08:44:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.234 08:44:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.234 "name": "Existed_Raid", 00:10:42.234 "uuid": "1a62aa04-e9d6-40bd-80f8-07f371e5f547", 00:10:42.234 "strip_size_kb": 64, 00:10:42.234 "state": "configuring", 00:10:42.234 "raid_level": "raid0", 00:10:42.234 "superblock": true, 00:10:42.234 "num_base_bdevs": 3, 00:10:42.234 "num_base_bdevs_discovered": 2, 00:10:42.234 "num_base_bdevs_operational": 3, 00:10:42.234 "base_bdevs_list": [ 00:10:42.234 { 00:10:42.234 "name": "BaseBdev1", 00:10:42.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.234 "is_configured": false, 00:10:42.234 "data_offset": 0, 00:10:42.234 "data_size": 0 00:10:42.234 }, 00:10:42.234 { 00:10:42.234 "name": "BaseBdev2", 00:10:42.234 "uuid": "a5169b7c-73fc-4446-815a-e4f6965b8cb9", 00:10:42.234 "is_configured": true, 00:10:42.234 "data_offset": 2048, 00:10:42.234 "data_size": 63488 00:10:42.234 }, 00:10:42.234 { 00:10:42.234 "name": "BaseBdev3", 00:10:42.234 "uuid": "ced38980-0192-4804-8350-b1a2db80900b", 00:10:42.234 "is_configured": true, 00:10:42.234 "data_offset": 2048, 00:10:42.234 "data_size": 63488 00:10:42.234 } 00:10:42.234 ] 00:10:42.234 }' 00:10:42.234 08:44:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.234 08:44:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.802 08:44:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:42.802 08:44:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.802 08:44:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.802 [2024-11-20 08:44:13.515596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:42.802 08:44:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.802 08:44:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:42.802 08:44:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.802 08:44:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.802 08:44:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.802 08:44:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.802 08:44:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.802 08:44:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.802 08:44:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.802 08:44:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.802 08:44:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.802 08:44:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.802 08:44:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.802 08:44:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.802 08:44:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.802 08:44:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.802 08:44:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.802 "name": "Existed_Raid", 00:10:42.802 "uuid": "1a62aa04-e9d6-40bd-80f8-07f371e5f547", 00:10:42.802 "strip_size_kb": 64, 
00:10:42.802 "state": "configuring", 00:10:42.802 "raid_level": "raid0", 00:10:42.802 "superblock": true, 00:10:42.802 "num_base_bdevs": 3, 00:10:42.802 "num_base_bdevs_discovered": 1, 00:10:42.802 "num_base_bdevs_operational": 3, 00:10:42.802 "base_bdevs_list": [ 00:10:42.802 { 00:10:42.802 "name": "BaseBdev1", 00:10:42.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.802 "is_configured": false, 00:10:42.802 "data_offset": 0, 00:10:42.802 "data_size": 0 00:10:42.802 }, 00:10:42.802 { 00:10:42.802 "name": null, 00:10:42.802 "uuid": "a5169b7c-73fc-4446-815a-e4f6965b8cb9", 00:10:42.802 "is_configured": false, 00:10:42.802 "data_offset": 0, 00:10:42.802 "data_size": 63488 00:10:42.802 }, 00:10:42.802 { 00:10:42.802 "name": "BaseBdev3", 00:10:42.802 "uuid": "ced38980-0192-4804-8350-b1a2db80900b", 00:10:42.802 "is_configured": true, 00:10:42.802 "data_offset": 2048, 00:10:42.802 "data_size": 63488 00:10:42.802 } 00:10:42.802 ] 00:10:42.802 }' 00:10:42.802 08:44:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.802 08:44:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.370 [2024-11-20 08:44:14.097357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.370 BaseBdev1 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.370 
[ 00:10:43.370 { 00:10:43.370 "name": "BaseBdev1", 00:10:43.370 "aliases": [ 00:10:43.370 "0c902d95-ce08-4a61-adfe-3f0923a8cde9" 00:10:43.370 ], 00:10:43.370 "product_name": "Malloc disk", 00:10:43.370 "block_size": 512, 00:10:43.370 "num_blocks": 65536, 00:10:43.370 "uuid": "0c902d95-ce08-4a61-adfe-3f0923a8cde9", 00:10:43.370 "assigned_rate_limits": { 00:10:43.370 "rw_ios_per_sec": 0, 00:10:43.370 "rw_mbytes_per_sec": 0, 00:10:43.370 "r_mbytes_per_sec": 0, 00:10:43.370 "w_mbytes_per_sec": 0 00:10:43.370 }, 00:10:43.370 "claimed": true, 00:10:43.370 "claim_type": "exclusive_write", 00:10:43.370 "zoned": false, 00:10:43.370 "supported_io_types": { 00:10:43.370 "read": true, 00:10:43.370 "write": true, 00:10:43.370 "unmap": true, 00:10:43.370 "flush": true, 00:10:43.370 "reset": true, 00:10:43.370 "nvme_admin": false, 00:10:43.370 "nvme_io": false, 00:10:43.370 "nvme_io_md": false, 00:10:43.370 "write_zeroes": true, 00:10:43.370 "zcopy": true, 00:10:43.370 "get_zone_info": false, 00:10:43.370 "zone_management": false, 00:10:43.370 "zone_append": false, 00:10:43.370 "compare": false, 00:10:43.370 "compare_and_write": false, 00:10:43.370 "abort": true, 00:10:43.370 "seek_hole": false, 00:10:43.370 "seek_data": false, 00:10:43.370 "copy": true, 00:10:43.370 "nvme_iov_md": false 00:10:43.370 }, 00:10:43.370 "memory_domains": [ 00:10:43.370 { 00:10:43.370 "dma_device_id": "system", 00:10:43.370 "dma_device_type": 1 00:10:43.370 }, 00:10:43.370 { 00:10:43.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.370 "dma_device_type": 2 00:10:43.370 } 00:10:43.370 ], 00:10:43.370 "driver_specific": {} 00:10:43.370 } 00:10:43.370 ] 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.370 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.371 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.371 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.371 "name": "Existed_Raid", 00:10:43.371 "uuid": "1a62aa04-e9d6-40bd-80f8-07f371e5f547", 00:10:43.371 "strip_size_kb": 64, 00:10:43.371 "state": "configuring", 00:10:43.371 "raid_level": "raid0", 00:10:43.371 "superblock": true, 
00:10:43.371 "num_base_bdevs": 3, 00:10:43.371 "num_base_bdevs_discovered": 2, 00:10:43.371 "num_base_bdevs_operational": 3, 00:10:43.371 "base_bdevs_list": [ 00:10:43.371 { 00:10:43.371 "name": "BaseBdev1", 00:10:43.371 "uuid": "0c902d95-ce08-4a61-adfe-3f0923a8cde9", 00:10:43.371 "is_configured": true, 00:10:43.371 "data_offset": 2048, 00:10:43.371 "data_size": 63488 00:10:43.371 }, 00:10:43.371 { 00:10:43.371 "name": null, 00:10:43.371 "uuid": "a5169b7c-73fc-4446-815a-e4f6965b8cb9", 00:10:43.371 "is_configured": false, 00:10:43.371 "data_offset": 0, 00:10:43.371 "data_size": 63488 00:10:43.371 }, 00:10:43.371 { 00:10:43.371 "name": "BaseBdev3", 00:10:43.371 "uuid": "ced38980-0192-4804-8350-b1a2db80900b", 00:10:43.371 "is_configured": true, 00:10:43.371 "data_offset": 2048, 00:10:43.371 "data_size": 63488 00:10:43.371 } 00:10:43.371 ] 00:10:43.371 }' 00:10:43.371 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.371 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.938 [2024-11-20 08:44:14.677572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.938 "name": "Existed_Raid", 00:10:43.938 "uuid": "1a62aa04-e9d6-40bd-80f8-07f371e5f547", 00:10:43.938 "strip_size_kb": 64, 00:10:43.938 "state": "configuring", 00:10:43.938 "raid_level": "raid0", 00:10:43.938 "superblock": true, 00:10:43.938 "num_base_bdevs": 3, 00:10:43.938 "num_base_bdevs_discovered": 1, 00:10:43.938 "num_base_bdevs_operational": 3, 00:10:43.938 "base_bdevs_list": [ 00:10:43.938 { 00:10:43.938 "name": "BaseBdev1", 00:10:43.938 "uuid": "0c902d95-ce08-4a61-adfe-3f0923a8cde9", 00:10:43.938 "is_configured": true, 00:10:43.938 "data_offset": 2048, 00:10:43.938 "data_size": 63488 00:10:43.938 }, 00:10:43.938 { 00:10:43.938 "name": null, 00:10:43.938 "uuid": "a5169b7c-73fc-4446-815a-e4f6965b8cb9", 00:10:43.938 "is_configured": false, 00:10:43.938 "data_offset": 0, 00:10:43.938 "data_size": 63488 00:10:43.938 }, 00:10:43.938 { 00:10:43.938 "name": null, 00:10:43.938 "uuid": "ced38980-0192-4804-8350-b1a2db80900b", 00:10:43.938 "is_configured": false, 00:10:43.938 "data_offset": 0, 00:10:43.938 "data_size": 63488 00:10:43.938 } 00:10:43.938 ] 00:10:43.938 }' 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.938 08:44:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.506 [2024-11-20 08:44:15.261765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.506 08:44:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.506 "name": "Existed_Raid", 00:10:44.506 "uuid": "1a62aa04-e9d6-40bd-80f8-07f371e5f547", 00:10:44.506 "strip_size_kb": 64, 00:10:44.506 "state": "configuring", 00:10:44.506 "raid_level": "raid0", 00:10:44.506 "superblock": true, 00:10:44.506 "num_base_bdevs": 3, 00:10:44.506 "num_base_bdevs_discovered": 2, 00:10:44.506 "num_base_bdevs_operational": 3, 00:10:44.506 "base_bdevs_list": [ 00:10:44.506 { 00:10:44.506 "name": "BaseBdev1", 00:10:44.506 "uuid": "0c902d95-ce08-4a61-adfe-3f0923a8cde9", 00:10:44.506 "is_configured": true, 00:10:44.506 "data_offset": 2048, 00:10:44.506 "data_size": 63488 00:10:44.506 }, 00:10:44.506 { 00:10:44.506 "name": null, 00:10:44.506 "uuid": "a5169b7c-73fc-4446-815a-e4f6965b8cb9", 00:10:44.506 "is_configured": false, 00:10:44.506 "data_offset": 0, 00:10:44.506 "data_size": 63488 00:10:44.506 }, 00:10:44.506 { 00:10:44.506 "name": "BaseBdev3", 00:10:44.506 "uuid": "ced38980-0192-4804-8350-b1a2db80900b", 00:10:44.506 "is_configured": true, 00:10:44.506 "data_offset": 2048, 00:10:44.506 "data_size": 63488 00:10:44.506 } 00:10:44.506 ] 00:10:44.506 }' 00:10:44.506 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.506 
08:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.074 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.074 08:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.074 08:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.074 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:45.074 08:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.074 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:45.074 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:45.074 08:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.074 08:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.074 [2024-11-20 08:44:15.869939] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:45.074 08:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.074 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:45.074 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.074 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.074 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.074 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.074 08:44:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.074 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.074 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.074 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.074 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.074 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.074 08:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.074 08:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.074 08:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.074 08:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.333 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.333 "name": "Existed_Raid", 00:10:45.333 "uuid": "1a62aa04-e9d6-40bd-80f8-07f371e5f547", 00:10:45.333 "strip_size_kb": 64, 00:10:45.333 "state": "configuring", 00:10:45.333 "raid_level": "raid0", 00:10:45.333 "superblock": true, 00:10:45.333 "num_base_bdevs": 3, 00:10:45.333 "num_base_bdevs_discovered": 1, 00:10:45.333 "num_base_bdevs_operational": 3, 00:10:45.333 "base_bdevs_list": [ 00:10:45.333 { 00:10:45.333 "name": null, 00:10:45.333 "uuid": "0c902d95-ce08-4a61-adfe-3f0923a8cde9", 00:10:45.333 "is_configured": false, 00:10:45.333 "data_offset": 0, 00:10:45.333 "data_size": 63488 00:10:45.333 }, 00:10:45.333 { 00:10:45.333 "name": null, 00:10:45.333 "uuid": "a5169b7c-73fc-4446-815a-e4f6965b8cb9", 00:10:45.333 "is_configured": false, 
00:10:45.333 "data_offset": 0, 00:10:45.333 "data_size": 63488 00:10:45.333 }, 00:10:45.333 { 00:10:45.333 "name": "BaseBdev3", 00:10:45.333 "uuid": "ced38980-0192-4804-8350-b1a2db80900b", 00:10:45.333 "is_configured": true, 00:10:45.333 "data_offset": 2048, 00:10:45.333 "data_size": 63488 00:10:45.333 } 00:10:45.333 ] 00:10:45.333 }' 00:10:45.333 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.334 08:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.593 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.593 08:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.593 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:45.593 08:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.593 08:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.593 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:45.852 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:45.852 08:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.852 08:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.852 [2024-11-20 08:44:16.511109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.852 08:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.852 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 
00:10:45.852 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.852 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.852 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.852 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.852 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.852 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.852 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.852 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.852 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.852 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.852 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.852 08:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.852 08:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.852 08:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.852 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.852 "name": "Existed_Raid", 00:10:45.852 "uuid": "1a62aa04-e9d6-40bd-80f8-07f371e5f547", 00:10:45.852 "strip_size_kb": 64, 00:10:45.852 "state": "configuring", 00:10:45.852 "raid_level": "raid0", 00:10:45.852 "superblock": true, 00:10:45.852 
"num_base_bdevs": 3, 00:10:45.852 "num_base_bdevs_discovered": 2, 00:10:45.852 "num_base_bdevs_operational": 3, 00:10:45.852 "base_bdevs_list": [ 00:10:45.852 { 00:10:45.852 "name": null, 00:10:45.852 "uuid": "0c902d95-ce08-4a61-adfe-3f0923a8cde9", 00:10:45.852 "is_configured": false, 00:10:45.852 "data_offset": 0, 00:10:45.852 "data_size": 63488 00:10:45.852 }, 00:10:45.852 { 00:10:45.852 "name": "BaseBdev2", 00:10:45.852 "uuid": "a5169b7c-73fc-4446-815a-e4f6965b8cb9", 00:10:45.852 "is_configured": true, 00:10:45.852 "data_offset": 2048, 00:10:45.852 "data_size": 63488 00:10:45.852 }, 00:10:45.852 { 00:10:45.852 "name": "BaseBdev3", 00:10:45.852 "uuid": "ced38980-0192-4804-8350-b1a2db80900b", 00:10:45.852 "is_configured": true, 00:10:45.852 "data_offset": 2048, 00:10:45.852 "data_size": 63488 00:10:45.852 } 00:10:45.852 ] 00:10:45.852 }' 00:10:45.852 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.852 08:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.134 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:46.134 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.134 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.134 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.134 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r 
'.[0].base_bdevs_list[0].uuid' 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0c902d95-ce08-4a61-adfe-3f0923a8cde9 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.394 [2024-11-20 08:44:17.154358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:46.394 [2024-11-20 08:44:17.154892] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:46.394 [2024-11-20 08:44:17.154925] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:46.394 NewBaseBdev 00:10:46.394 [2024-11-20 08:44:17.155305] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:46.394 [2024-11-20 08:44:17.155519] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:46.394 [2024-11-20 08:44:17.155543] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:46.394 [2024-11-20 08:44:17.155734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.394 [ 00:10:46.394 { 00:10:46.394 "name": "NewBaseBdev", 00:10:46.394 "aliases": [ 00:10:46.394 "0c902d95-ce08-4a61-adfe-3f0923a8cde9" 00:10:46.394 ], 00:10:46.394 "product_name": "Malloc disk", 00:10:46.394 "block_size": 512, 00:10:46.394 "num_blocks": 65536, 00:10:46.394 "uuid": "0c902d95-ce08-4a61-adfe-3f0923a8cde9", 00:10:46.394 "assigned_rate_limits": { 00:10:46.394 "rw_ios_per_sec": 0, 00:10:46.394 "rw_mbytes_per_sec": 0, 00:10:46.394 "r_mbytes_per_sec": 0, 00:10:46.394 "w_mbytes_per_sec": 0 00:10:46.394 }, 00:10:46.394 "claimed": true, 00:10:46.394 "claim_type": "exclusive_write", 00:10:46.394 "zoned": false, 00:10:46.394 "supported_io_types": { 
00:10:46.394 "read": true, 00:10:46.394 "write": true, 00:10:46.394 "unmap": true, 00:10:46.394 "flush": true, 00:10:46.394 "reset": true, 00:10:46.394 "nvme_admin": false, 00:10:46.394 "nvme_io": false, 00:10:46.394 "nvme_io_md": false, 00:10:46.394 "write_zeroes": true, 00:10:46.394 "zcopy": true, 00:10:46.394 "get_zone_info": false, 00:10:46.394 "zone_management": false, 00:10:46.394 "zone_append": false, 00:10:46.394 "compare": false, 00:10:46.394 "compare_and_write": false, 00:10:46.394 "abort": true, 00:10:46.394 "seek_hole": false, 00:10:46.394 "seek_data": false, 00:10:46.394 "copy": true, 00:10:46.394 "nvme_iov_md": false 00:10:46.394 }, 00:10:46.394 "memory_domains": [ 00:10:46.394 { 00:10:46.394 "dma_device_id": "system", 00:10:46.394 "dma_device_type": 1 00:10:46.394 }, 00:10:46.394 { 00:10:46.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.394 "dma_device_type": 2 00:10:46.394 } 00:10:46.394 ], 00:10:46.394 "driver_specific": {} 00:10:46.394 } 00:10:46.394 ] 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.394 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.395 08:44:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.395 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.395 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.395 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.395 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.395 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.395 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.395 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.395 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.395 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.395 "name": "Existed_Raid", 00:10:46.395 "uuid": "1a62aa04-e9d6-40bd-80f8-07f371e5f547", 00:10:46.395 "strip_size_kb": 64, 00:10:46.395 "state": "online", 00:10:46.395 "raid_level": "raid0", 00:10:46.395 "superblock": true, 00:10:46.395 "num_base_bdevs": 3, 00:10:46.395 "num_base_bdevs_discovered": 3, 00:10:46.395 "num_base_bdevs_operational": 3, 00:10:46.395 "base_bdevs_list": [ 00:10:46.395 { 00:10:46.395 "name": "NewBaseBdev", 00:10:46.395 "uuid": "0c902d95-ce08-4a61-adfe-3f0923a8cde9", 00:10:46.395 "is_configured": true, 00:10:46.395 "data_offset": 2048, 00:10:46.395 "data_size": 63488 00:10:46.395 }, 00:10:46.395 { 00:10:46.395 "name": "BaseBdev2", 00:10:46.395 "uuid": "a5169b7c-73fc-4446-815a-e4f6965b8cb9", 00:10:46.395 "is_configured": true, 00:10:46.395 "data_offset": 2048, 00:10:46.395 "data_size": 63488 00:10:46.395 }, 00:10:46.395 { 00:10:46.395 
"name": "BaseBdev3", 00:10:46.395 "uuid": "ced38980-0192-4804-8350-b1a2db80900b", 00:10:46.395 "is_configured": true, 00:10:46.395 "data_offset": 2048, 00:10:46.395 "data_size": 63488 00:10:46.395 } 00:10:46.395 ] 00:10:46.395 }' 00:10:46.395 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.395 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.964 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:46.964 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:46.964 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:46.964 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:46.964 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:46.964 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:46.964 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:46.964 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.964 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.964 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:46.964 [2024-11-20 08:44:17.718961] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.964 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.964 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:46.964 "name": "Existed_Raid", 00:10:46.964 "aliases": [ 
00:10:46.964 "1a62aa04-e9d6-40bd-80f8-07f371e5f547" 00:10:46.964 ], 00:10:46.964 "product_name": "Raid Volume", 00:10:46.964 "block_size": 512, 00:10:46.964 "num_blocks": 190464, 00:10:46.964 "uuid": "1a62aa04-e9d6-40bd-80f8-07f371e5f547", 00:10:46.964 "assigned_rate_limits": { 00:10:46.965 "rw_ios_per_sec": 0, 00:10:46.965 "rw_mbytes_per_sec": 0, 00:10:46.965 "r_mbytes_per_sec": 0, 00:10:46.965 "w_mbytes_per_sec": 0 00:10:46.965 }, 00:10:46.965 "claimed": false, 00:10:46.965 "zoned": false, 00:10:46.965 "supported_io_types": { 00:10:46.965 "read": true, 00:10:46.965 "write": true, 00:10:46.965 "unmap": true, 00:10:46.965 "flush": true, 00:10:46.965 "reset": true, 00:10:46.965 "nvme_admin": false, 00:10:46.965 "nvme_io": false, 00:10:46.965 "nvme_io_md": false, 00:10:46.965 "write_zeroes": true, 00:10:46.965 "zcopy": false, 00:10:46.965 "get_zone_info": false, 00:10:46.965 "zone_management": false, 00:10:46.965 "zone_append": false, 00:10:46.965 "compare": false, 00:10:46.965 "compare_and_write": false, 00:10:46.965 "abort": false, 00:10:46.965 "seek_hole": false, 00:10:46.965 "seek_data": false, 00:10:46.965 "copy": false, 00:10:46.965 "nvme_iov_md": false 00:10:46.965 }, 00:10:46.965 "memory_domains": [ 00:10:46.965 { 00:10:46.965 "dma_device_id": "system", 00:10:46.965 "dma_device_type": 1 00:10:46.965 }, 00:10:46.965 { 00:10:46.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.965 "dma_device_type": 2 00:10:46.965 }, 00:10:46.965 { 00:10:46.965 "dma_device_id": "system", 00:10:46.965 "dma_device_type": 1 00:10:46.965 }, 00:10:46.965 { 00:10:46.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.965 "dma_device_type": 2 00:10:46.965 }, 00:10:46.965 { 00:10:46.965 "dma_device_id": "system", 00:10:46.965 "dma_device_type": 1 00:10:46.965 }, 00:10:46.965 { 00:10:46.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.965 "dma_device_type": 2 00:10:46.965 } 00:10:46.965 ], 00:10:46.965 "driver_specific": { 00:10:46.965 "raid": { 00:10:46.965 "uuid": 
"1a62aa04-e9d6-40bd-80f8-07f371e5f547", 00:10:46.965 "strip_size_kb": 64, 00:10:46.965 "state": "online", 00:10:46.965 "raid_level": "raid0", 00:10:46.965 "superblock": true, 00:10:46.965 "num_base_bdevs": 3, 00:10:46.965 "num_base_bdevs_discovered": 3, 00:10:46.965 "num_base_bdevs_operational": 3, 00:10:46.965 "base_bdevs_list": [ 00:10:46.965 { 00:10:46.965 "name": "NewBaseBdev", 00:10:46.965 "uuid": "0c902d95-ce08-4a61-adfe-3f0923a8cde9", 00:10:46.965 "is_configured": true, 00:10:46.965 "data_offset": 2048, 00:10:46.965 "data_size": 63488 00:10:46.965 }, 00:10:46.965 { 00:10:46.965 "name": "BaseBdev2", 00:10:46.965 "uuid": "a5169b7c-73fc-4446-815a-e4f6965b8cb9", 00:10:46.965 "is_configured": true, 00:10:46.965 "data_offset": 2048, 00:10:46.965 "data_size": 63488 00:10:46.965 }, 00:10:46.965 { 00:10:46.965 "name": "BaseBdev3", 00:10:46.965 "uuid": "ced38980-0192-4804-8350-b1a2db80900b", 00:10:46.965 "is_configured": true, 00:10:46.965 "data_offset": 2048, 00:10:46.965 "data_size": 63488 00:10:46.965 } 00:10:46.965 ] 00:10:46.965 } 00:10:46.965 } 00:10:46.965 }' 00:10:46.965 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:46.965 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:46.965 BaseBdev2 00:10:46.965 BaseBdev3' 00:10:46.965 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.965 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:46.965 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.965 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:46.965 08:44:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.965 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.965 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.225 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.225 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.225 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.225 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.225 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.225 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:47.225 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.225 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.225 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.225 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.225 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.225 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.225 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:47.225 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.225 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.225 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.225 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.225 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.225 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.225 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:47.225 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.225 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.225 [2024-11-20 08:44:18.030669] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:47.225 [2024-11-20 08:44:18.030703] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:47.225 [2024-11-20 08:44:18.030788] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:47.225 [2024-11-20 08:44:18.030856] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:47.225 [2024-11-20 08:44:18.030876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:47.225 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.225 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64434 00:10:47.225 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64434 ']' 
00:10:47.225 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64434 00:10:47.225 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:47.225 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:47.225 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64434 00:10:47.225 killing process with pid 64434 00:10:47.225 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:47.225 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:47.225 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64434' 00:10:47.225 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64434 00:10:47.225 [2024-11-20 08:44:18.072856] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:47.225 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64434 00:10:47.529 [2024-11-20 08:44:18.337730] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:48.466 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:48.466 00:10:48.466 real 0m11.768s 00:10:48.466 user 0m19.547s 00:10:48.466 sys 0m1.579s 00:10:48.466 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.466 ************************************ 00:10:48.466 END TEST raid_state_function_test_sb 00:10:48.466 ************************************ 00:10:48.466 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.727 08:44:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 
00:10:48.727 08:44:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:48.727 08:44:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.727 08:44:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:48.727 ************************************ 00:10:48.727 START TEST raid_superblock_test 00:10:48.727 ************************************ 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 
00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65065 00:10:48.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65065 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65065 ']' 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.727 08:44:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.727 [2024-11-20 08:44:19.533954] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:10:48.727 [2024-11-20 08:44:19.534161] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65065 ] 00:10:48.986 [2024-11-20 08:44:19.721862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.986 [2024-11-20 08:44:19.874648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.245 [2024-11-20 08:44:20.093653] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.245 [2024-11-20 08:44:20.093729] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.813 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:49.813 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:49.813 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:49.813 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:49.813 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:49.813 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:49.813 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:49.813 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:49.813 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:49.813 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:49.813 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:49.813 
08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.813 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.813 malloc1 00:10:49.813 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.813 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:49.813 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.813 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.813 [2024-11-20 08:44:20.579544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:49.813 [2024-11-20 08:44:20.579843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.813 [2024-11-20 08:44:20.580019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:49.814 [2024-11-20 08:44:20.580158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.814 [2024-11-20 08:44:20.583125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.814 [2024-11-20 08:44:20.583331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:49.814 pt1 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.814 malloc2 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.814 [2024-11-20 08:44:20.635897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:49.814 [2024-11-20 08:44:20.636000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.814 [2024-11-20 08:44:20.636037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:49.814 [2024-11-20 08:44:20.636052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.814 [2024-11-20 08:44:20.639005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.814 [2024-11-20 08:44:20.639198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:49.814 
pt2 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.814 malloc3 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.814 [2024-11-20 08:44:20.709243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:49.814 [2024-11-20 08:44:20.709476] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.814 [2024-11-20 08:44:20.709526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:49.814 [2024-11-20 08:44:20.709544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.814 [2024-11-20 08:44:20.712468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.814 [2024-11-20 08:44:20.712641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:49.814 pt3 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.814 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.814 [2024-11-20 08:44:20.721451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:49.814 [2024-11-20 08:44:20.724124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:49.814 [2024-11-20 08:44:20.724290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:49.814 [2024-11-20 08:44:20.724550] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:49.814 [2024-11-20 08:44:20.724602] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:49.814 [2024-11-20 08:44:20.724989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:49.814 [2024-11-20 08:44:20.725247] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:49.814 [2024-11-20 08:44:20.725266] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:49.814 [2024-11-20 08:44:20.725560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.073 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.074 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:50.074 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.074 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.074 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.074 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.074 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.074 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.074 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.074 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.074 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.074 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.074 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.074 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.074 08:44:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.074 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.074 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.074 "name": "raid_bdev1", 00:10:50.074 "uuid": "2fefcd57-a13f-4e1e-98f2-704b30fac7d1", 00:10:50.074 "strip_size_kb": 64, 00:10:50.074 "state": "online", 00:10:50.074 "raid_level": "raid0", 00:10:50.074 "superblock": true, 00:10:50.074 "num_base_bdevs": 3, 00:10:50.074 "num_base_bdevs_discovered": 3, 00:10:50.074 "num_base_bdevs_operational": 3, 00:10:50.074 "base_bdevs_list": [ 00:10:50.074 { 00:10:50.074 "name": "pt1", 00:10:50.074 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.074 "is_configured": true, 00:10:50.074 "data_offset": 2048, 00:10:50.074 "data_size": 63488 00:10:50.074 }, 00:10:50.074 { 00:10:50.074 "name": "pt2", 00:10:50.074 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.074 "is_configured": true, 00:10:50.074 "data_offset": 2048, 00:10:50.074 "data_size": 63488 00:10:50.074 }, 00:10:50.074 { 00:10:50.074 "name": "pt3", 00:10:50.074 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.074 "is_configured": true, 00:10:50.074 "data_offset": 2048, 00:10:50.074 "data_size": 63488 00:10:50.074 } 00:10:50.074 ] 00:10:50.074 }' 00:10:50.074 08:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.074 08:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.332 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:50.333 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:50.333 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:50.333 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names
00:10:50.333 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:50.333 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:50.333 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:50.333 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:50.333 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.333 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:50.333 [2024-11-20 08:44:21.233984] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:50.592 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:50.592 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:50.592 "name": "raid_bdev1",
00:10:50.592 "aliases": [
00:10:50.592 "2fefcd57-a13f-4e1e-98f2-704b30fac7d1"
00:10:50.592 ],
00:10:50.592 "product_name": "Raid Volume",
00:10:50.592 "block_size": 512,
00:10:50.592 "num_blocks": 190464,
00:10:50.592 "uuid": "2fefcd57-a13f-4e1e-98f2-704b30fac7d1",
00:10:50.592 "assigned_rate_limits": {
00:10:50.592 "rw_ios_per_sec": 0,
00:10:50.592 "rw_mbytes_per_sec": 0,
00:10:50.592 "r_mbytes_per_sec": 0,
00:10:50.592 "w_mbytes_per_sec": 0
00:10:50.592 },
00:10:50.592 "claimed": false,
00:10:50.592 "zoned": false,
00:10:50.592 "supported_io_types": {
00:10:50.592 "read": true,
00:10:50.592 "write": true,
00:10:50.592 "unmap": true,
00:10:50.592 "flush": true,
00:10:50.592 "reset": true,
00:10:50.592 "nvme_admin": false,
00:10:50.592 "nvme_io": false,
00:10:50.592 "nvme_io_md": false,
00:10:50.592 "write_zeroes": true,
00:10:50.592 "zcopy": false,
00:10:50.592 "get_zone_info": false,
00:10:50.592 "zone_management": false,
00:10:50.592 "zone_append": false,
00:10:50.592 "compare": false,
00:10:50.592 "compare_and_write": false,
00:10:50.592 "abort": false,
00:10:50.592 "seek_hole": false,
00:10:50.592 "seek_data": false,
00:10:50.592 "copy": false,
00:10:50.592 "nvme_iov_md": false
00:10:50.592 },
00:10:50.592 "memory_domains": [
00:10:50.592 {
00:10:50.592 "dma_device_id": "system",
00:10:50.592 "dma_device_type": 1
00:10:50.592 },
00:10:50.592 {
00:10:50.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:50.592 "dma_device_type": 2
00:10:50.592 },
00:10:50.592 {
00:10:50.592 "dma_device_id": "system",
00:10:50.592 "dma_device_type": 1
00:10:50.592 },
00:10:50.592 {
00:10:50.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:50.593 "dma_device_type": 2
00:10:50.593 },
00:10:50.593 {
00:10:50.593 "dma_device_id": "system",
00:10:50.593 "dma_device_type": 1
00:10:50.593 },
00:10:50.593 {
00:10:50.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:50.593 "dma_device_type": 2
00:10:50.593 }
00:10:50.593 ],
00:10:50.593 "driver_specific": {
00:10:50.593 "raid": {
00:10:50.593 "uuid": "2fefcd57-a13f-4e1e-98f2-704b30fac7d1",
00:10:50.593 "strip_size_kb": 64,
00:10:50.593 "state": "online",
00:10:50.593 "raid_level": "raid0",
00:10:50.593 "superblock": true,
00:10:50.593 "num_base_bdevs": 3,
00:10:50.593 "num_base_bdevs_discovered": 3,
00:10:50.593 "num_base_bdevs_operational": 3,
00:10:50.593 "base_bdevs_list": [
00:10:50.593 {
00:10:50.593 "name": "pt1",
00:10:50.593 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:50.593 "is_configured": true,
00:10:50.593 "data_offset": 2048,
00:10:50.593 "data_size": 63488
00:10:50.593 },
00:10:50.593 {
00:10:50.593 "name": "pt2",
00:10:50.593 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:50.593 "is_configured": true,
00:10:50.593 "data_offset": 2048,
00:10:50.593 "data_size": 63488
00:10:50.593 },
00:10:50.593 {
00:10:50.593 "name": "pt3",
00:10:50.593 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:50.593 "is_configured": true,
00:10:50.593 "data_offset": 2048,
00:10:50.593 "data_size": 63488
00:10:50.593 }
00:10:50.593 ]
00:10:50.593 }
00:10:50.593 }
00:10:50.593 }'
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:10:50.593 pt2
00:10:50.593 pt3'
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:50.593 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.864 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:50.864 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:50.864 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:50.864 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:50.864 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:10:50.864 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:50.864 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.864 [2024-11-20 08:44:21.554071] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:50.864 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:50.864 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2fefcd57-a13f-4e1e-98f2-704b30fac7d1
00:10:50.864 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2fefcd57-a13f-4e1e-98f2-704b30fac7d1 ']'
00:10:50.864 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:50.864 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.865 [2024-11-20 08:44:21.609709] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:50.865 [2024-11-20 08:44:21.609879] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:50.865 [2024-11-20 08:44:21.610011] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:50.865 [2024-11-20 08:44:21.610095] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:50.865 [2024-11-20 08:44:21.610111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:50.865 [2024-11-20 08:44:21.753802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:10:50.865 [2024-11-20 08:44:21.756394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:10:50.865 [2024-11-20 08:44:21.756621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:10:50.865 [2024-11-20 08:44:21.756711] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:10:50.865 [2024-11-20 08:44:21.756790] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:10:50.865 [2024-11-20 08:44:21.756827] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:10:50.865 [2024-11-20 08:44:21.756869] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:50.865 [2024-11-20 08:44:21.756886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:10:50.865 request:
00:10:50.865 {
00:10:50.865 "name": "raid_bdev1",
00:10:50.865 "raid_level": "raid0",
00:10:50.865 "base_bdevs": [
00:10:50.865 "malloc1",
00:10:50.865 "malloc2",
00:10:50.865 "malloc3"
00:10:50.865 ],
00:10:50.865 "strip_size_kb": 64,
00:10:50.865 "superblock": false,
00:10:50.865 "method": "bdev_raid_create",
00:10:50.865 "req_id": 1
00:10:50.865 }
00:10:50.865 Got JSON-RPC error response
00:10:50.865 response:
00:10:50.865 {
00:10:50.865 "code": -17,
00:10:50.865 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:10:50.865 }
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:50.865 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.149 [2024-11-20 08:44:21.841797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:10:51.149 [2024-11-20 08:44:21.841875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:51.149 [2024-11-20 08:44:21.841907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:10:51.149 [2024-11-20 08:44:21.841923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:51.149 [2024-11-20 08:44:21.844839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:51.149 [2024-11-20 08:44:21.844895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:10:51.149 [2024-11-20 08:44:21.845009] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:10:51.149 [2024-11-20 08:44:21.845081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:10:51.149 pt1
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:51.149 "name": "raid_bdev1",
00:10:51.149 "uuid": "2fefcd57-a13f-4e1e-98f2-704b30fac7d1",
00:10:51.149 "strip_size_kb": 64,
00:10:51.149 "state": "configuring",
00:10:51.149 "raid_level": "raid0",
00:10:51.149 "superblock": true,
00:10:51.149 "num_base_bdevs": 3,
00:10:51.149 "num_base_bdevs_discovered": 1,
00:10:51.149 "num_base_bdevs_operational": 3,
00:10:51.149 "base_bdevs_list": [
00:10:51.149 {
00:10:51.149 "name": "pt1",
00:10:51.149 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:51.149 "is_configured": true,
00:10:51.149 "data_offset": 2048,
00:10:51.149 "data_size": 63488
00:10:51.149 },
00:10:51.149 {
00:10:51.149 "name": null,
00:10:51.149 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:51.149 "is_configured": false,
00:10:51.149 "data_offset": 2048,
00:10:51.149 "data_size": 63488
00:10:51.149 },
00:10:51.149 {
00:10:51.149 "name": null,
00:10:51.149 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:51.149 "is_configured": false,
00:10:51.149 "data_offset": 2048,
00:10:51.149 "data_size": 63488
00:10:51.149 }
00:10:51.149 ]
00:10:51.149 }'
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:51.149 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.408 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:10:51.408 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:51.408 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.408 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.668 [2024-11-20 08:44:22.325961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:51.668 [2024-11-20 08:44:22.326042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:51.668 [2024-11-20 08:44:22.326077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:10:51.668 [2024-11-20 08:44:22.326093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:51.668 [2024-11-20 08:44:22.326709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:51.668 [2024-11-20 08:44:22.326742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:51.668 [2024-11-20 08:44:22.326858] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:10:51.668 [2024-11-20 08:44:22.326900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:51.668 pt2
00:10:51.668 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.668 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:10:51.668 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.668 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.668 [2024-11-20 08:44:22.333953] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:10:51.668 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.668 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:10:51.668 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:51.668 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:51.668 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:51.668 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:51.668 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:51.668 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:51.668 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:51.668 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:51.668 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:51.668 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:51.668 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.668 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:51.668 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:51.668 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.668 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:51.668 "name": "raid_bdev1",
00:10:51.668 "uuid": "2fefcd57-a13f-4e1e-98f2-704b30fac7d1",
00:10:51.668 "strip_size_kb": 64,
00:10:51.668 "state": "configuring",
00:10:51.668 "raid_level": "raid0",
00:10:51.668 "superblock": true,
00:10:51.668 "num_base_bdevs": 3,
00:10:51.668 "num_base_bdevs_discovered": 1,
00:10:51.668 "num_base_bdevs_operational": 3,
00:10:51.668 "base_bdevs_list": [
00:10:51.668 {
00:10:51.668 "name": "pt1",
00:10:51.668 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:51.668 "is_configured": true,
00:10:51.668 "data_offset": 2048,
00:10:51.668 "data_size": 63488
00:10:51.668 },
00:10:51.668 {
00:10:51.668 "name": null,
00:10:51.668 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:51.668 "is_configured": false,
00:10:51.668 "data_offset": 0,
00:10:51.668 "data_size": 63488
00:10:51.668 },
00:10:51.668 {
00:10:51.668 "name": null,
00:10:51.668 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:51.668 "is_configured": false,
00:10:51.668 "data_offset": 2048,
00:10:51.668 "data_size": 63488
00:10:51.668 }
00:10:51.668 ]
00:10:51.668 }'
00:10:51.668 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:51.668 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:52.236 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:10:52.236 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:52.236 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:52.236 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.236 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:52.236 [2024-11-20 08:44:22.850070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:52.236 [2024-11-20 08:44:22.850171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:52.236 [2024-11-20 08:44:22.850201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:10:52.236 [2024-11-20 08:44:22.850218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:52.236 [2024-11-20 08:44:22.850776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:52.236 [2024-11-20 08:44:22.850814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:52.236 [2024-11-20 08:44:22.850917] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:10:52.236 [2024-11-20 08:44:22.850956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:52.236 pt2
00:10:52.236 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.236 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:10:52.236 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:52.236 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:10:52.236 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.236 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:52.236 [2024-11-20 08:44:22.858046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:10:52.236 [2024-11-20 08:44:22.858107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:52.236 [2024-11-20 08:44:22.858130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:10:52.236 [2024-11-20 08:44:22.858164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:52.236 [2024-11-20 08:44:22.858659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:52.237 [2024-11-20 08:44:22.858711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:10:52.237 [2024-11-20 08:44:22.858802] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:10:52.237 [2024-11-20 08:44:22.858839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:10:52.237 [2024-11-20 08:44:22.858995] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:10:52.237 [2024-11-20 08:44:22.859016] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:10:52.237 [2024-11-20 08:44:22.859347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:10:52.237 [2024-11-20 08:44:22.859542] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:10:52.237 [2024-11-20 08:44:22.859571] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:10:52.237 [2024-11-20 08:44:22.859755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:52.237 pt3
00:10:52.237 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.237 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:10:52.237 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:52.237 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:10:52.237 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:52.237 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:52.237 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:52.237 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:52.237 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:52.237 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:52.237 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:52.237 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:52.237 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:52.237 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:52.237 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:52.237 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.237 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:52.237 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.237 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:52.237 "name": "raid_bdev1",
00:10:52.237 "uuid": "2fefcd57-a13f-4e1e-98f2-704b30fac7d1",
00:10:52.237 "strip_size_kb": 64,
00:10:52.237 "state": "online",
00:10:52.237 "raid_level": "raid0",
00:10:52.237 "superblock": true,
00:10:52.237 "num_base_bdevs": 3,
00:10:52.237 "num_base_bdevs_discovered": 3,
00:10:52.237 "num_base_bdevs_operational": 3,
00:10:52.237 "base_bdevs_list": [
00:10:52.237 {
00:10:52.237 "name": "pt1",
00:10:52.237 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:52.237 "is_configured": true,
00:10:52.237 "data_offset": 2048,
00:10:52.237 "data_size": 63488
00:10:52.237 },
00:10:52.237 {
00:10:52.237 "name": "pt2",
00:10:52.237 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:52.237 "is_configured": true,
00:10:52.237 "data_offset": 2048,
00:10:52.237 "data_size": 63488
00:10:52.237 },
00:10:52.237 {
00:10:52.237 "name": "pt3",
00:10:52.237 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:52.237 "is_configured": true,
00:10:52.237 "data_offset": 2048,
00:10:52.237 "data_size": 63488
00:10:52.237 }
00:10:52.237 ]
00:10:52.237 }'
00:10:52.237 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:52.237 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:52.496 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:10:52.496 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:10:52.496 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:52.496 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:52.496 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:52.496 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:52.496 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:52.496 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:52.496 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.496 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:52.496 [2024-11-20 08:44:23.386611] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:52.496 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.756 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:52.756 "name": "raid_bdev1",
00:10:52.756 "aliases": [
00:10:52.756 "2fefcd57-a13f-4e1e-98f2-704b30fac7d1"
00:10:52.756 ],
00:10:52.756 "product_name": "Raid Volume",
00:10:52.756 "block_size": 512,
00:10:52.756 "num_blocks": 190464,
00:10:52.756 "uuid": "2fefcd57-a13f-4e1e-98f2-704b30fac7d1",
00:10:52.756 "assigned_rate_limits": {
00:10:52.756 "rw_ios_per_sec": 0,
00:10:52.756 "rw_mbytes_per_sec": 0,
00:10:52.756 "r_mbytes_per_sec": 0,
00:10:52.756 "w_mbytes_per_sec": 0
00:10:52.756 },
00:10:52.756 "claimed": false,
00:10:52.756 "zoned": false,
00:10:52.756 "supported_io_types": {
00:10:52.756 "read": true,
00:10:52.756 "write": true,
00:10:52.756 "unmap": true,
00:10:52.756 "flush": true,
00:10:52.756 "reset": true,
00:10:52.756 "nvme_admin": false,
00:10:52.756 "nvme_io": false,
00:10:52.756 "nvme_io_md": false,
00:10:52.756 "write_zeroes": true,
00:10:52.756 "zcopy": false,
00:10:52.756 "get_zone_info": false,
00:10:52.756 "zone_management": false,
00:10:52.756 "zone_append": false,
00:10:52.756 "compare": false,
00:10:52.756 "compare_and_write": false,
00:10:52.756 "abort": false,
00:10:52.756 "seek_hole": false,
00:10:52.756 "seek_data": false,
00:10:52.756 "copy": false,
00:10:52.756 "nvme_iov_md": false
00:10:52.756 },
00:10:52.756 "memory_domains": [
00:10:52.756 {
00:10:52.756 "dma_device_id": "system",
00:10:52.756 "dma_device_type": 1
00:10:52.756 },
00:10:52.756 {
00:10:52.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:52.756 "dma_device_type": 2
00:10:52.756 },
00:10:52.756 {
00:10:52.756 "dma_device_id": "system",
00:10:52.756 "dma_device_type": 1
00:10:52.756 },
00:10:52.756 {
00:10:52.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:52.756 "dma_device_type": 2
00:10:52.756 },
00:10:52.756 {
00:10:52.756 "dma_device_id": "system",
00:10:52.756 "dma_device_type": 1
00:10:52.756 },
00:10:52.756 {
00:10:52.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:52.756 "dma_device_type": 2
00:10:52.756 }
00:10:52.756 ],
00:10:52.756 "driver_specific": {
00:10:52.756 "raid": {
00:10:52.756 "uuid": "2fefcd57-a13f-4e1e-98f2-704b30fac7d1",
00:10:52.756 "strip_size_kb": 64,
00:10:52.756 "state": "online",
00:10:52.756 "raid_level": "raid0",
00:10:52.756 "superblock": true,
00:10:52.756 "num_base_bdevs": 3,
00:10:52.756 "num_base_bdevs_discovered": 3,
00:10:52.756 "num_base_bdevs_operational": 3,
00:10:52.756 "base_bdevs_list": [
00:10:52.756 {
00:10:52.756 "name": "pt1",
00:10:52.756 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:52.756 "is_configured": true,
00:10:52.756 "data_offset": 2048,
00:10:52.756 "data_size": 63488
00:10:52.756 },
00:10:52.756 {
00:10:52.756 "name": "pt2",
00:10:52.756 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:52.756 "is_configured": true,
00:10:52.756 "data_offset": 2048,
00:10:52.756 "data_size": 63488
00:10:52.756 },
00:10:52.756
{ 00:10:52.756 "name": "pt3", 00:10:52.756 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:52.756 "is_configured": true, 00:10:52.756 "data_offset": 2048, 00:10:52.756 "data_size": 63488 00:10:52.756 } 00:10:52.756 ] 00:10:52.756 } 00:10:52.756 } 00:10:52.756 }' 00:10:52.756 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:52.756 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:52.756 pt2 00:10:52.756 pt3' 00:10:52.756 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.757 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:52.757 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.757 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:52.757 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.757 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.757 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.757 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.757 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.757 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.757 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.757 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:52.757 08:44:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.757 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.757 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.757 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.757 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.757 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.757 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.757 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:52.757 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.757 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.757 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.757 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.016 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.016 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.016 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:53.016 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:53.016 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.016 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.016 
[2024-11-20 08:44:23.710652] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.016 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.016 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2fefcd57-a13f-4e1e-98f2-704b30fac7d1 '!=' 2fefcd57-a13f-4e1e-98f2-704b30fac7d1 ']' 00:10:53.016 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:53.016 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:53.016 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:53.016 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65065 00:10:53.016 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65065 ']' 00:10:53.016 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65065 00:10:53.016 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:53.016 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.016 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65065 00:10:53.016 killing process with pid 65065 00:10:53.016 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.016 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.016 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65065' 00:10:53.016 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65065 00:10:53.016 [2024-11-20 08:44:23.791186] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:53.016 [2024-11-20 08:44:23.791308] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.016 [2024-11-20 08:44:23.791388] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.016 [2024-11-20 08:44:23.791408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:53.016 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65065 00:10:53.275 [2024-11-20 08:44:24.067836] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:54.210 ************************************ 00:10:54.210 END TEST raid_superblock_test 00:10:54.210 ************************************ 00:10:54.210 08:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:54.210 00:10:54.210 real 0m5.679s 00:10:54.210 user 0m8.556s 00:10:54.210 sys 0m0.817s 00:10:54.210 08:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.210 08:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.470 08:44:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:10:54.470 08:44:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:54.470 08:44:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.470 08:44:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:54.470 ************************************ 00:10:54.470 START TEST raid_read_error_test 00:10:54.470 ************************************ 00:10:54.470 08:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:10:54.470 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:54.470 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:54.470 08:44:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:54.470 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:54.470 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.470 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:54.470 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5SIbtO78Jz 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65329 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65329 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65329 ']' 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:54.471 08:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.471 [2024-11-20 08:44:25.263729] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:10:54.471 [2024-11-20 08:44:25.263887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65329 ] 00:10:54.733 [2024-11-20 08:44:25.437600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.733 [2024-11-20 08:44:25.565288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.992 [2024-11-20 08:44:25.767086] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.992 [2024-11-20 08:44:25.767369] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.560 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.560 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:55.560 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.560 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:55.560 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.560 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.560 BaseBdev1_malloc 00:10:55.560 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.560 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:55.560 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.560 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.560 true 00:10:55.560 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:55.560 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:55.560 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.560 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.560 [2024-11-20 08:44:26.295465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:55.560 [2024-11-20 08:44:26.295535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.560 [2024-11-20 08:44:26.295577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:55.560 [2024-11-20 08:44:26.295598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.560 [2024-11-20 08:44:26.298372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.560 [2024-11-20 08:44:26.298434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:55.560 BaseBdev1 00:10:55.560 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.561 BaseBdev2_malloc 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.561 true 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.561 [2024-11-20 08:44:26.355176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:55.561 [2024-11-20 08:44:26.355398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.561 [2024-11-20 08:44:26.355439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:55.561 [2024-11-20 08:44:26.355460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.561 [2024-11-20 08:44:26.358281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.561 [2024-11-20 08:44:26.358331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:55.561 BaseBdev2 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.561 BaseBdev3_malloc 00:10:55.561 08:44:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.561 true 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.561 [2024-11-20 08:44:26.425099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:55.561 [2024-11-20 08:44:26.425192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.561 [2024-11-20 08:44:26.425226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:55.561 [2024-11-20 08:44:26.425246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.561 [2024-11-20 08:44:26.428080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.561 [2024-11-20 08:44:26.428134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:55.561 BaseBdev3 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.561 [2024-11-20 08:44:26.433219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.561 [2024-11-20 08:44:26.435640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.561 [2024-11-20 08:44:26.435755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.561 [2024-11-20 08:44:26.436031] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:55.561 [2024-11-20 08:44:26.436052] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:55.561 [2024-11-20 08:44:26.436424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:55.561 [2024-11-20 08:44:26.436648] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:55.561 [2024-11-20 08:44:26.436691] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:55.561 [2024-11-20 08:44:26.436889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.561 08:44:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.561 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.820 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.820 "name": "raid_bdev1", 00:10:55.820 "uuid": "7f4e7679-15ff-4991-9207-62e2a5ece678", 00:10:55.820 "strip_size_kb": 64, 00:10:55.820 "state": "online", 00:10:55.820 "raid_level": "raid0", 00:10:55.820 "superblock": true, 00:10:55.820 "num_base_bdevs": 3, 00:10:55.820 "num_base_bdevs_discovered": 3, 00:10:55.820 "num_base_bdevs_operational": 3, 00:10:55.820 "base_bdevs_list": [ 00:10:55.820 { 00:10:55.820 "name": "BaseBdev1", 00:10:55.820 "uuid": "f81c2bf9-22d6-555e-b020-71b994ecda9d", 00:10:55.820 "is_configured": true, 00:10:55.820 "data_offset": 2048, 00:10:55.820 "data_size": 63488 00:10:55.820 }, 00:10:55.820 { 00:10:55.820 "name": "BaseBdev2", 00:10:55.820 "uuid": "ef091d9d-6047-57e4-b2c8-3e127a2a8b7b", 00:10:55.820 "is_configured": true, 00:10:55.820 "data_offset": 2048, 00:10:55.820 "data_size": 63488 
00:10:55.820 }, 00:10:55.820 { 00:10:55.820 "name": "BaseBdev3", 00:10:55.820 "uuid": "0b82b7fc-571a-5646-ab0e-1d762bae3aab", 00:10:55.820 "is_configured": true, 00:10:55.820 "data_offset": 2048, 00:10:55.820 "data_size": 63488 00:10:55.820 } 00:10:55.820 ] 00:10:55.820 }' 00:10:55.820 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.820 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.080 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:56.080 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:56.339 [2024-11-20 08:44:27.082894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:57.274 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:57.274 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.274 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.274 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.274 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:57.274 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:57.274 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:57.274 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:57.274 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.274 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:57.274 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.274 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.274 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:57.274 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.274 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.274 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.274 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.274 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.275 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.275 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.275 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.275 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.275 08:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.275 "name": "raid_bdev1", 00:10:57.275 "uuid": "7f4e7679-15ff-4991-9207-62e2a5ece678", 00:10:57.275 "strip_size_kb": 64, 00:10:57.275 "state": "online", 00:10:57.275 "raid_level": "raid0", 00:10:57.275 "superblock": true, 00:10:57.275 "num_base_bdevs": 3, 00:10:57.275 "num_base_bdevs_discovered": 3, 00:10:57.275 "num_base_bdevs_operational": 3, 00:10:57.275 "base_bdevs_list": [ 00:10:57.275 { 00:10:57.275 "name": "BaseBdev1", 00:10:57.275 "uuid": "f81c2bf9-22d6-555e-b020-71b994ecda9d", 00:10:57.275 "is_configured": true, 00:10:57.275 "data_offset": 2048, 00:10:57.275 "data_size": 63488 
00:10:57.275 }, 00:10:57.275 { 00:10:57.275 "name": "BaseBdev2", 00:10:57.275 "uuid": "ef091d9d-6047-57e4-b2c8-3e127a2a8b7b", 00:10:57.275 "is_configured": true, 00:10:57.275 "data_offset": 2048, 00:10:57.275 "data_size": 63488 00:10:57.275 }, 00:10:57.275 { 00:10:57.275 "name": "BaseBdev3", 00:10:57.275 "uuid": "0b82b7fc-571a-5646-ab0e-1d762bae3aab", 00:10:57.275 "is_configured": true, 00:10:57.275 "data_offset": 2048, 00:10:57.275 "data_size": 63488 00:10:57.275 } 00:10:57.275 ] 00:10:57.275 }' 00:10:57.275 08:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.275 08:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.843 08:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:57.843 08:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.843 08:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.843 [2024-11-20 08:44:28.470612] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:57.843 [2024-11-20 08:44:28.470655] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.843 [2024-11-20 08:44:28.474055] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.843 [2024-11-20 08:44:28.474114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.843 [2024-11-20 08:44:28.474166] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.843 [2024-11-20 08:44:28.474180] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:57.843 { 00:10:57.843 "results": [ 00:10:57.843 { 00:10:57.843 "job": "raid_bdev1", 00:10:57.843 "core_mask": "0x1", 00:10:57.843 "workload": "randrw", 00:10:57.843 "percentage": 50, 
00:10:57.843 "status": "finished", 00:10:57.843 "queue_depth": 1, 00:10:57.843 "io_size": 131072, 00:10:57.843 "runtime": 1.385153, 00:10:57.843 "iops": 10505.698648452553, 00:10:57.843 "mibps": 1313.2123310565692, 00:10:57.843 "io_failed": 1, 00:10:57.843 "io_timeout": 0, 00:10:57.843 "avg_latency_us": 133.28417083637862, 00:10:57.843 "min_latency_us": 29.78909090909091, 00:10:57.843 "max_latency_us": 1824.581818181818 00:10:57.843 } 00:10:57.843 ], 00:10:57.843 "core_count": 1 00:10:57.843 } 00:10:57.843 08:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.843 08:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65329 00:10:57.843 08:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65329 ']' 00:10:57.843 08:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65329 00:10:57.843 08:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:57.844 08:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.844 08:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65329 00:10:57.844 killing process with pid 65329 00:10:57.844 08:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.844 08:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.844 08:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65329' 00:10:57.844 08:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65329 00:10:57.844 [2024-11-20 08:44:28.506628] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:57.844 08:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65329 00:10:57.844 [2024-11-20 
08:44:28.720735] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:59.223 08:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5SIbtO78Jz 00:10:59.223 08:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:59.223 08:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:59.223 08:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:59.223 08:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:59.223 08:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:59.223 08:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:59.223 08:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:59.223 00:10:59.223 real 0m4.650s 00:10:59.223 user 0m5.781s 00:10:59.223 sys 0m0.551s 00:10:59.223 ************************************ 00:10:59.223 END TEST raid_read_error_test 00:10:59.223 ************************************ 00:10:59.223 08:44:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.223 08:44:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.223 08:44:29 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:10:59.223 08:44:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:59.223 08:44:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.223 08:44:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:59.223 ************************************ 00:10:59.223 START TEST raid_write_error_test 00:10:59.223 ************************************ 00:10:59.223 08:44:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:10:59.223 08:44:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:59.223 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:59.223 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:59.223 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:59.223 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:59.223 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:59.223 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:59.223 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:59.223 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:59.223 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:59.223 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:59.223 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:59.223 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:59.224 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:59.224 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:59.224 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:59.224 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:59.224 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:59.224 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:59.224 08:44:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:59.224 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:59.224 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:59.224 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:59.224 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:59.224 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:59.224 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aWQxuaG6hz 00:10:59.224 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65469 00:10:59.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.224 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65469 00:10:59.224 08:44:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65469 ']' 00:10:59.224 08:44:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.224 08:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:59.224 08:44:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:59.224 08:44:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:59.224 08:44:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:59.224 08:44:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.224 [2024-11-20 08:44:29.983502] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:10:59.224 [2024-11-20 08:44:29.983682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65469 ] 00:10:59.483 [2024-11-20 08:44:30.159795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.483 [2024-11-20 08:44:30.292791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.742 [2024-11-20 08:44:30.496832] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.742 [2024-11-20 08:44:30.496880] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.437 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:00.437 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:00.437 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:00.437 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:00.437 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.437 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.437 BaseBdev1_malloc 00:11:00.437 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.437 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:00.437 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.437 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.437 true 00:11:00.437 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.437 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:00.437 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.437 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.437 [2024-11-20 08:44:31.060950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:00.437 [2024-11-20 08:44:31.061038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.438 [2024-11-20 08:44:31.061070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:00.438 [2024-11-20 08:44:31.061089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.438 [2024-11-20 08:44:31.063929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.438 [2024-11-20 08:44:31.063984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:00.438 BaseBdev1 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:00.438 BaseBdev2_malloc 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.438 true 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.438 [2024-11-20 08:44:31.116978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:00.438 [2024-11-20 08:44:31.117052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.438 [2024-11-20 08:44:31.117080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:00.438 [2024-11-20 08:44:31.117097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.438 [2024-11-20 08:44:31.119981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.438 [2024-11-20 08:44:31.120034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:00.438 BaseBdev2 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:00.438 08:44:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.438 BaseBdev3_malloc 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.438 true 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.438 [2024-11-20 08:44:31.194855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:00.438 [2024-11-20 08:44:31.194954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.438 [2024-11-20 08:44:31.194983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:00.438 [2024-11-20 08:44:31.195000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.438 [2024-11-20 08:44:31.197932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.438 [2024-11-20 08:44:31.198122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:00.438 BaseBdev3 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.438 [2024-11-20 08:44:31.202977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:00.438 [2024-11-20 08:44:31.205453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:00.438 [2024-11-20 08:44:31.205575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:00.438 [2024-11-20 08:44:31.205821] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:00.438 [2024-11-20 08:44:31.205841] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:00.438 [2024-11-20 08:44:31.206197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:00.438 [2024-11-20 08:44:31.206413] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:00.438 [2024-11-20 08:44:31.206436] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:00.438 [2024-11-20 08:44:31.206623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.438 "name": "raid_bdev1", 00:11:00.438 "uuid": "7d092fc3-815c-4397-9859-eceb16d067d5", 00:11:00.438 "strip_size_kb": 64, 00:11:00.438 "state": "online", 00:11:00.438 "raid_level": "raid0", 00:11:00.438 "superblock": true, 00:11:00.438 "num_base_bdevs": 3, 00:11:00.438 "num_base_bdevs_discovered": 3, 00:11:00.438 "num_base_bdevs_operational": 3, 00:11:00.438 "base_bdevs_list": [ 00:11:00.438 { 00:11:00.438 "name": "BaseBdev1", 
00:11:00.438 "uuid": "8d5c82b0-5f1a-503a-81a2-9aee9731345a", 00:11:00.438 "is_configured": true, 00:11:00.438 "data_offset": 2048, 00:11:00.438 "data_size": 63488 00:11:00.438 }, 00:11:00.438 { 00:11:00.438 "name": "BaseBdev2", 00:11:00.438 "uuid": "cd687fe3-b116-5909-8133-ad508bce90c3", 00:11:00.438 "is_configured": true, 00:11:00.438 "data_offset": 2048, 00:11:00.438 "data_size": 63488 00:11:00.438 }, 00:11:00.438 { 00:11:00.438 "name": "BaseBdev3", 00:11:00.438 "uuid": "7636c2d2-b199-5699-bc86-053b16157aac", 00:11:00.438 "is_configured": true, 00:11:00.438 "data_offset": 2048, 00:11:00.438 "data_size": 63488 00:11:00.438 } 00:11:00.438 ] 00:11:00.438 }' 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.438 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.005 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:01.005 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:01.005 [2024-11-20 08:44:31.856730] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.941 "name": "raid_bdev1", 00:11:01.941 "uuid": "7d092fc3-815c-4397-9859-eceb16d067d5", 00:11:01.941 "strip_size_kb": 64, 00:11:01.941 "state": "online", 00:11:01.941 
"raid_level": "raid0", 00:11:01.941 "superblock": true, 00:11:01.941 "num_base_bdevs": 3, 00:11:01.941 "num_base_bdevs_discovered": 3, 00:11:01.941 "num_base_bdevs_operational": 3, 00:11:01.941 "base_bdevs_list": [ 00:11:01.941 { 00:11:01.941 "name": "BaseBdev1", 00:11:01.941 "uuid": "8d5c82b0-5f1a-503a-81a2-9aee9731345a", 00:11:01.941 "is_configured": true, 00:11:01.941 "data_offset": 2048, 00:11:01.941 "data_size": 63488 00:11:01.941 }, 00:11:01.941 { 00:11:01.941 "name": "BaseBdev2", 00:11:01.941 "uuid": "cd687fe3-b116-5909-8133-ad508bce90c3", 00:11:01.941 "is_configured": true, 00:11:01.941 "data_offset": 2048, 00:11:01.941 "data_size": 63488 00:11:01.941 }, 00:11:01.941 { 00:11:01.941 "name": "BaseBdev3", 00:11:01.941 "uuid": "7636c2d2-b199-5699-bc86-053b16157aac", 00:11:01.941 "is_configured": true, 00:11:01.941 "data_offset": 2048, 00:11:01.941 "data_size": 63488 00:11:01.941 } 00:11:01.941 ] 00:11:01.941 }' 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.941 08:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.510 08:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:02.510 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.510 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.510 [2024-11-20 08:44:33.247100] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:02.510 [2024-11-20 08:44:33.247319] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.510 [2024-11-20 08:44:33.250903] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.510 { 00:11:02.510 "results": [ 00:11:02.510 { 00:11:02.510 "job": "raid_bdev1", 00:11:02.510 "core_mask": "0x1", 00:11:02.510 "workload": "randrw", 00:11:02.510 "percentage": 
50, 00:11:02.510 "status": "finished", 00:11:02.510 "queue_depth": 1, 00:11:02.510 "io_size": 131072, 00:11:02.510 "runtime": 1.388222, 00:11:02.510 "iops": 10689.932878170783, 00:11:02.510 "mibps": 1336.2416097713478, 00:11:02.510 "io_failed": 1, 00:11:02.510 "io_timeout": 0, 00:11:02.510 "avg_latency_us": 130.7178957556156, 00:11:02.510 "min_latency_us": 40.261818181818185, 00:11:02.510 "max_latency_us": 1891.6072727272726 00:11:02.510 } 00:11:02.510 ], 00:11:02.510 "core_count": 1 00:11:02.510 } 00:11:02.510 [2024-11-20 08:44:33.251115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.510 [2024-11-20 08:44:33.251202] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.510 [2024-11-20 08:44:33.251221] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:02.510 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.510 08:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65469 00:11:02.510 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65469 ']' 00:11:02.510 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65469 00:11:02.510 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:02.510 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.510 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65469 00:11:02.510 killing process with pid 65469 00:11:02.510 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:02.510 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:02.510 08:44:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65469' 00:11:02.510 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65469 00:11:02.510 [2024-11-20 08:44:33.284233] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:02.510 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65469 00:11:02.769 [2024-11-20 08:44:33.493328] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.705 08:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aWQxuaG6hz 00:11:03.705 08:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:03.705 08:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:03.705 08:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:03.705 08:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:03.705 ************************************ 00:11:03.705 END TEST raid_write_error_test 00:11:03.705 ************************************ 00:11:03.705 08:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:03.705 08:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:03.705 08:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:03.705 00:11:03.705 real 0m4.734s 00:11:03.705 user 0m5.890s 00:11:03.705 sys 0m0.592s 00:11:03.705 08:44:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.705 08:44:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.965 08:44:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:03.965 08:44:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:11:03.965 08:44:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:03.965 08:44:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.965 08:44:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.965 ************************************ 00:11:03.965 START TEST raid_state_function_test 00:11:03.965 ************************************ 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:03.965 08:44:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65613 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65613' 00:11:03.965 Process raid pid: 65613 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65613 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65613 ']' 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.965 08:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.965 [2024-11-20 08:44:34.754899] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:11:03.965 [2024-11-20 08:44:34.755085] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.224 [2024-11-20 08:44:34.935691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.224 [2024-11-20 08:44:35.063388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.483 [2024-11-20 08:44:35.268130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.483 [2024-11-20 08:44:35.268198] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.048 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.048 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:05.048 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:11:05.048 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.048 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.048 [2024-11-20 08:44:35.680976] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:05.048 [2024-11-20 08:44:35.681041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:05.049 [2024-11-20 08:44:35.681059] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:05.049 [2024-11-20 08:44:35.681076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:05.049 [2024-11-20 08:44:35.681086] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:05.049 [2024-11-20 08:44:35.681101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:05.049 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.049 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:11:05.049 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:05.049 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:05.049 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:05.049 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:05.049 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:05.049 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- #
local raid_bdev_info
00:11:05.049 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:05.049 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:05.049 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:05.049 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:05.049 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:05.049 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.049 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.049 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.049 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:05.049 "name": "Existed_Raid",
00:11:05.049 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:05.049 "strip_size_kb": 64,
00:11:05.049 "state": "configuring",
00:11:05.049 "raid_level": "concat",
00:11:05.049 "superblock": false,
00:11:05.049 "num_base_bdevs": 3,
00:11:05.049 "num_base_bdevs_discovered": 0,
00:11:05.049 "num_base_bdevs_operational": 3,
00:11:05.049 "base_bdevs_list": [
00:11:05.049 {
00:11:05.049 "name": "BaseBdev1",
00:11:05.049 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:05.049 "is_configured": false,
00:11:05.049 "data_offset": 0,
00:11:05.049 "data_size": 0
00:11:05.049 },
00:11:05.049 {
00:11:05.049 "name": "BaseBdev2",
00:11:05.049 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:05.049 "is_configured": false,
00:11:05.049 "data_offset": 0,
00:11:05.049 "data_size": 0
00:11:05.049 },
00:11:05.049 {
00:11:05.049 "name": "BaseBdev3",
00:11:05.049 "uuid": "00000000-0000-0000-0000-000000000000",
"is_configured": false,
00:11:05.049 "data_offset": 0,
00:11:05.049 "data_size": 0
00:11:05.049 }
00:11:05.049 ]
00:11:05.049 }'
00:11:05.049 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:05.049 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.307 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:05.307 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.307 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.307 [2024-11-20 08:44:36.217029] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:05.307 [2024-11-20 08:44:36.217238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:11:05.307 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.567 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:11:05.567 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.567 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.567 [2024-11-20 08:44:36.229059] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:05.567 [2024-11-20 08:44:36.229288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:05.567 [2024-11-20 08:44:36.229418] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:05.567 [2024-11-20 08:44:36.229557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:05.567 [2024-11-20
08:44:36.229673] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:05.567 [2024-11-20 08:44:36.229740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:05.567 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.567 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:11:05.567 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.567 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.567 [2024-11-20 08:44:36.273836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:05.567 BaseBdev1
00:11:05.567 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.567 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:11:05.567 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:11:05.567 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:05.567 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:05.567 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:05.567 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:05.567 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:05.567 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.567 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.567 08:44:36
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.567 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:11:05.567 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.567 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.567 [
00:11:05.567 {
00:11:05.567 "name": "BaseBdev1",
00:11:05.567 "aliases": [
00:11:05.567 "b4fda023-8102-49cb-adbb-83d68192a636"
00:11:05.567 ],
00:11:05.567 "product_name": "Malloc disk",
00:11:05.567 "block_size": 512,
00:11:05.567 "num_blocks": 65536,
00:11:05.567 "uuid": "b4fda023-8102-49cb-adbb-83d68192a636",
00:11:05.567 "assigned_rate_limits": {
00:11:05.567 "rw_ios_per_sec": 0,
00:11:05.567 "rw_mbytes_per_sec": 0,
00:11:05.567 "r_mbytes_per_sec": 0,
00:11:05.567 "w_mbytes_per_sec": 0
00:11:05.567 },
00:11:05.567 "claimed": true,
00:11:05.567 "claim_type": "exclusive_write",
00:11:05.567 "zoned": false,
00:11:05.567 "supported_io_types": {
00:11:05.567 "read": true,
00:11:05.567 "write": true,
00:11:05.567 "unmap": true,
00:11:05.567 "flush": true,
00:11:05.567 "reset": true,
00:11:05.567 "nvme_admin": false,
00:11:05.568 "nvme_io": false,
00:11:05.568 "nvme_io_md": false,
00:11:05.568 "write_zeroes": true,
00:11:05.568 "zcopy": true,
00:11:05.568 "get_zone_info": false,
00:11:05.568 "zone_management": false,
00:11:05.568 "zone_append": false,
00:11:05.568 "compare": false,
00:11:05.568 "compare_and_write": false,
00:11:05.568 "abort": true,
00:11:05.568 "seek_hole": false,
00:11:05.568 "seek_data": false,
00:11:05.568 "copy": true,
00:11:05.568 "nvme_iov_md": false
00:11:05.568 },
00:11:05.568 "memory_domains": [
00:11:05.568 {
00:11:05.568 "dma_device_id": "system",
00:11:05.568 "dma_device_type": 1
00:11:05.568 },
00:11:05.568 {
00:11:05.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:05.568 "dma_device_type":
2
00:11:05.568 }
00:11:05.568 ],
00:11:05.568 "driver_specific": {}
00:11:05.568 }
00:11:05.568 ]
00:11:05.568 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.568 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:05.568 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:11:05.568 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:05.568 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:05.568 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:05.568 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:05.568 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:05.568 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:05.568 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:05.568 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:05.568 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:05.568 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:05.568 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.568 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.568 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:05.568 08:44:36 bdev_raid.raid_state_function_test --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.568 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:05.568 "name": "Existed_Raid",
00:11:05.568 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:05.568 "strip_size_kb": 64,
00:11:05.568 "state": "configuring",
00:11:05.568 "raid_level": "concat",
00:11:05.568 "superblock": false,
00:11:05.568 "num_base_bdevs": 3,
00:11:05.568 "num_base_bdevs_discovered": 1,
00:11:05.568 "num_base_bdevs_operational": 3,
00:11:05.568 "base_bdevs_list": [
00:11:05.568 {
00:11:05.568 "name": "BaseBdev1",
00:11:05.568 "uuid": "b4fda023-8102-49cb-adbb-83d68192a636",
00:11:05.568 "is_configured": true,
00:11:05.568 "data_offset": 0,
00:11:05.568 "data_size": 65536
00:11:05.568 },
00:11:05.568 {
00:11:05.568 "name": "BaseBdev2",
00:11:05.568 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:05.568 "is_configured": false,
00:11:05.568 "data_offset": 0,
00:11:05.568 "data_size": 0
00:11:05.568 },
00:11:05.568 {
00:11:05.568 "name": "BaseBdev3",
00:11:05.568 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:05.568 "is_configured": false,
00:11:05.568 "data_offset": 0,
00:11:05.568 "data_size": 0
00:11:05.568 }
00:11:05.568 ]
00:11:05.568 }'
00:11:05.568 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:05.568 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.137 [2024-11-20 08:44:36.874061] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:06.137 [2024-11-20 08:44:36.874143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*:
raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.137 [2024-11-20 08:44:36.882132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:06.137 [2024-11-20 08:44:36.884677] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:06.137 [2024-11-20 08:44:36.884761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:06.137 [2024-11-20 08:44:36.884778] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:06.137 [2024-11-20 08:44:36.884794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test --
bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.137 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:06.137 "name": "Existed_Raid",
00:11:06.137 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:06.137 "strip_size_kb": 64,
00:11:06.137 "state": "configuring",
00:11:06.137 "raid_level": "concat",
00:11:06.137 "superblock": false,
00:11:06.137 "num_base_bdevs": 3,
00:11:06.137 "num_base_bdevs_discovered": 1,
00:11:06.137 "num_base_bdevs_operational": 3,
00:11:06.137 "base_bdevs_list": [
00:11:06.137 {
00:11:06.137 "name": "BaseBdev1",
00:11:06.137 "uuid": "b4fda023-8102-49cb-adbb-83d68192a636",
00:11:06.137 "is_configured": true,
00:11:06.137 "data_offset": 0,
00:11:06.137 "data_size": 65536
00:11:06.137 },
00:11:06.137 {
00:11:06.137 "name": "BaseBdev2",
00:11:06.137 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:06.137 "is_configured": false,
00:11:06.137 "data_offset": 0,
00:11:06.137 "data_size": 0
00:11:06.137 },
00:11:06.137 {
00:11:06.137 "name": "BaseBdev3",
00:11:06.137 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:06.137 "is_configured": false,
00:11:06.137 "data_offset": 0,
00:11:06.137 "data_size": 0
00:11:06.137 }
00:11:06.138 ]
00:11:06.138 }'
00:11:06.138 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:06.138 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.706 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:06.706 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.706 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.706 [2024-11-20 08:44:37.417340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:06.706 BaseBdev2
00:11:06.706 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.706 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:11:06.706 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:11:06.706 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:06.706 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:06.706 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:06.706 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:06.706 08:44:37
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:06.706 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.706 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.706 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.706 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:06.706 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.706 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.706 [
00:11:06.706 {
00:11:06.706 "name": "BaseBdev2",
00:11:06.706 "aliases": [
00:11:06.706 "0e8a60db-8747-412f-b67d-686897859d62"
00:11:06.706 ],
00:11:06.706 "product_name": "Malloc disk",
00:11:06.706 "block_size": 512,
00:11:06.706 "num_blocks": 65536,
00:11:06.706 "uuid": "0e8a60db-8747-412f-b67d-686897859d62",
00:11:06.706 "assigned_rate_limits": {
00:11:06.706 "rw_ios_per_sec": 0,
00:11:06.706 "rw_mbytes_per_sec": 0,
00:11:06.706 "r_mbytes_per_sec": 0,
00:11:06.706 "w_mbytes_per_sec": 0
00:11:06.706 },
00:11:06.706 "claimed": true,
00:11:06.706 "claim_type": "exclusive_write",
00:11:06.706 "zoned": false,
00:11:06.706 "supported_io_types": {
00:11:06.706 "read": true,
00:11:06.706 "write": true,
00:11:06.706 "unmap": true,
00:11:06.706 "flush": true,
00:11:06.706 "reset": true,
00:11:06.706 "nvme_admin": false,
00:11:06.706 "nvme_io": false,
00:11:06.706 "nvme_io_md": false,
00:11:06.707 "write_zeroes": true,
00:11:06.707 "zcopy": true,
00:11:06.707 "get_zone_info": false,
00:11:06.707 "zone_management": false,
00:11:06.707 "zone_append": false,
00:11:06.707 "compare": false,
00:11:06.707 "compare_and_write": false,
00:11:06.707 "abort": true,
00:11:06.707 "seek_hole": false,
00:11:06.707
"seek_data": false,
00:11:06.707 "copy": true,
00:11:06.707 "nvme_iov_md": false
00:11:06.707 },
00:11:06.707 "memory_domains": [
00:11:06.707 {
00:11:06.707 "dma_device_id": "system",
00:11:06.707 "dma_device_type": 1
00:11:06.707 },
00:11:06.707 {
00:11:06.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:06.707 "dma_device_type": 2
00:11:06.707 }
00:11:06.707 ],
00:11:06.707 "driver_specific": {}
00:11:06.707 }
00:11:06.707 ]
00:11:06.707 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.707 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:06.707 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:06.707 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:06.707 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:11:06.707 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:06.707 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:06.707 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:06.707 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:06.707 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:06.707 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:06.707 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:06.707 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:06.707 08:44:37 bdev_raid.raid_state_function_test --
bdev/bdev_raid.sh@111 -- # local tmp
00:11:06.707 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:06.707 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.707 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.707 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:06.707 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.707 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:06.707 "name": "Existed_Raid",
00:11:06.707 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:06.707 "strip_size_kb": 64,
00:11:06.707 "state": "configuring",
00:11:06.707 "raid_level": "concat",
00:11:06.707 "superblock": false,
00:11:06.707 "num_base_bdevs": 3,
00:11:06.707 "num_base_bdevs_discovered": 2,
00:11:06.707 "num_base_bdevs_operational": 3,
00:11:06.707 "base_bdevs_list": [
00:11:06.707 {
00:11:06.707 "name": "BaseBdev1",
00:11:06.707 "uuid": "b4fda023-8102-49cb-adbb-83d68192a636",
00:11:06.707 "is_configured": true,
00:11:06.707 "data_offset": 0,
00:11:06.707 "data_size": 65536
00:11:06.707 },
00:11:06.707 {
00:11:06.707 "name": "BaseBdev2",
00:11:06.707 "uuid": "0e8a60db-8747-412f-b67d-686897859d62",
00:11:06.707 "is_configured": true,
00:11:06.707 "data_offset": 0,
00:11:06.707 "data_size": 65536
00:11:06.707 },
00:11:06.707 {
00:11:06.707 "name": "BaseBdev3",
00:11:06.707 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:06.707 "is_configured": false,
00:11:06.707 "data_offset": 0,
00:11:06.707 "data_size": 0
00:11:06.707 }
00:11:06.707 ]
00:11:06.707 }'
00:11:06.707 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:06.707 08:44:37 bdev_raid.raid_state_function_test --
common/autotest_common.sh@10 -- # set +x
00:11:07.334 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:07.334 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.334 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.334 [2024-11-20 08:44:37.986572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:07.334 [2024-11-20 08:44:37.986646] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:11:07.334 [2024-11-20 08:44:37.986668] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:11:07.334 [2024-11-20 08:44:37.987173] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:11:07.334 [2024-11-20 08:44:37.987431] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:11:07.334 [2024-11-20 08:44:37.987467] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:11:07.334 [2024-11-20 08:44:37.987795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb BaseBdev3
00:11:07.334 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.334 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:11:07.334 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:11:07.334 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:07.334 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:07.334 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:07.334 08:44:37
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:07.334 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:07.334 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.334 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.334 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.334 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:07.334 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.334 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.334 [
00:11:07.334 {
00:11:07.334 "name": "BaseBdev3",
00:11:07.334 "aliases": [
00:11:07.334 "20f26049-ccf6-41ae-a3a9-b70232cf3927"
00:11:07.334 ],
00:11:07.334 "product_name": "Malloc disk",
00:11:07.334 "block_size": 512,
00:11:07.334 "num_blocks": 65536,
00:11:07.334 "uuid": "20f26049-ccf6-41ae-a3a9-b70232cf3927",
00:11:07.334 "assigned_rate_limits": {
00:11:07.334 "rw_ios_per_sec": 0,
00:11:07.334 "rw_mbytes_per_sec": 0,
00:11:07.334 "r_mbytes_per_sec": 0,
00:11:07.334 "w_mbytes_per_sec": 0
00:11:07.334 },
00:11:07.334 "claimed": true,
00:11:07.334 "claim_type": "exclusive_write",
00:11:07.334 "zoned": false,
00:11:07.334 "supported_io_types": {
00:11:07.334 "read": true,
00:11:07.334 "write": true,
00:11:07.334 "unmap": true,
00:11:07.334 "flush": true,
00:11:07.334 "reset": true,
00:11:07.334 "nvme_admin": false,
00:11:07.334 "nvme_io": false,
00:11:07.334 "nvme_io_md": false,
00:11:07.334 "write_zeroes": true,
00:11:07.334 "zcopy": true,
00:11:07.334 "get_zone_info": false,
00:11:07.334 "zone_management": false,
00:11:07.334 "zone_append": false,
00:11:07.334 "compare": false,
00:11:07.334 "compare_and_write": false,
00:11:07.334 "abort": true,
00:11:07.334 "seek_hole": false,
00:11:07.334 "seek_data": false,
00:11:07.334 "copy": true,
00:11:07.334 "nvme_iov_md": false
00:11:07.334 },
00:11:07.334 "memory_domains": [
00:11:07.334 {
00:11:07.334 "dma_device_id": "system",
00:11:07.334 "dma_device_type": 1
00:11:07.334 },
00:11:07.335 {
00:11:07.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:07.335 "dma_device_type": 2
00:11:07.335 }
00:11:07.335 ],
00:11:07.335 "driver_specific": {}
00:11:07.335 }
00:11:07.335 ]
00:11:07.335 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.335 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:07.335 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:07.335 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:07.335 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:11:07.335 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:07.335 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:07.335 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:07.335 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:07.335 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:07.335 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:07.335 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:07.335 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local
num_base_bdevs_discovered
00:11:07.335 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:07.335 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:07.335 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.335 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:07.335 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.335 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.335 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:07.335 "name": "Existed_Raid",
00:11:07.335 "uuid": "2937bc25-0bf9-4576-adc2-869263aa81cb",
00:11:07.335 "strip_size_kb": 64,
00:11:07.335 "state": "online",
00:11:07.335 "raid_level": "concat",
00:11:07.335 "superblock": false,
00:11:07.335 "num_base_bdevs": 3,
00:11:07.335 "num_base_bdevs_discovered": 3,
00:11:07.335 "num_base_bdevs_operational": 3,
00:11:07.335 "base_bdevs_list": [
00:11:07.335 {
00:11:07.335 "name": "BaseBdev1",
00:11:07.335 "uuid": "b4fda023-8102-49cb-adbb-83d68192a636",
00:11:07.335 "is_configured": true,
00:11:07.335 "data_offset": 0,
00:11:07.335 "data_size": 65536
00:11:07.335 },
00:11:07.335 {
00:11:07.335 "name": "BaseBdev2",
00:11:07.335 "uuid": "0e8a60db-8747-412f-b67d-686897859d62",
00:11:07.335 "is_configured": true,
00:11:07.335 "data_offset": 0,
00:11:07.335 "data_size": 65536
00:11:07.335 },
00:11:07.335 {
00:11:07.335 "name": "BaseBdev3",
00:11:07.335 "uuid": "20f26049-ccf6-41ae-a3a9-b70232cf3927",
00:11:07.335 "is_configured": true,
00:11:07.335 "data_offset": 0,
00:11:07.335 "data_size": 65536
00:11:07.335 }
00:11:07.335 ]
00:11:07.335 }'
00:11:07.335 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:07.335 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.933 [2024-11-20 08:44:38.571220] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:07.933 "name": "Existed_Raid", 00:11:07.933 "aliases": [ 00:11:07.933 "2937bc25-0bf9-4576-adc2-869263aa81cb" 00:11:07.933 ], 00:11:07.933 "product_name": "Raid Volume", 00:11:07.933 "block_size": 512, 00:11:07.933 "num_blocks": 196608, 00:11:07.933 "uuid": "2937bc25-0bf9-4576-adc2-869263aa81cb", 00:11:07.933 "assigned_rate_limits": { 00:11:07.933 "rw_ios_per_sec": 0, 00:11:07.933 "rw_mbytes_per_sec": 0, 00:11:07.933 "r_mbytes_per_sec": 
0, 00:11:07.933 "w_mbytes_per_sec": 0 00:11:07.933 }, 00:11:07.933 "claimed": false, 00:11:07.933 "zoned": false, 00:11:07.933 "supported_io_types": { 00:11:07.933 "read": true, 00:11:07.933 "write": true, 00:11:07.933 "unmap": true, 00:11:07.933 "flush": true, 00:11:07.933 "reset": true, 00:11:07.933 "nvme_admin": false, 00:11:07.933 "nvme_io": false, 00:11:07.933 "nvme_io_md": false, 00:11:07.933 "write_zeroes": true, 00:11:07.933 "zcopy": false, 00:11:07.933 "get_zone_info": false, 00:11:07.933 "zone_management": false, 00:11:07.933 "zone_append": false, 00:11:07.933 "compare": false, 00:11:07.933 "compare_and_write": false, 00:11:07.933 "abort": false, 00:11:07.933 "seek_hole": false, 00:11:07.933 "seek_data": false, 00:11:07.933 "copy": false, 00:11:07.933 "nvme_iov_md": false 00:11:07.933 }, 00:11:07.933 "memory_domains": [ 00:11:07.933 { 00:11:07.933 "dma_device_id": "system", 00:11:07.933 "dma_device_type": 1 00:11:07.933 }, 00:11:07.933 { 00:11:07.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.933 "dma_device_type": 2 00:11:07.933 }, 00:11:07.933 { 00:11:07.933 "dma_device_id": "system", 00:11:07.933 "dma_device_type": 1 00:11:07.933 }, 00:11:07.933 { 00:11:07.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.933 "dma_device_type": 2 00:11:07.933 }, 00:11:07.933 { 00:11:07.933 "dma_device_id": "system", 00:11:07.933 "dma_device_type": 1 00:11:07.933 }, 00:11:07.933 { 00:11:07.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.933 "dma_device_type": 2 00:11:07.933 } 00:11:07.933 ], 00:11:07.933 "driver_specific": { 00:11:07.933 "raid": { 00:11:07.933 "uuid": "2937bc25-0bf9-4576-adc2-869263aa81cb", 00:11:07.933 "strip_size_kb": 64, 00:11:07.933 "state": "online", 00:11:07.933 "raid_level": "concat", 00:11:07.933 "superblock": false, 00:11:07.933 "num_base_bdevs": 3, 00:11:07.933 "num_base_bdevs_discovered": 3, 00:11:07.933 "num_base_bdevs_operational": 3, 00:11:07.933 "base_bdevs_list": [ 00:11:07.933 { 00:11:07.933 "name": "BaseBdev1", 
00:11:07.933 "uuid": "b4fda023-8102-49cb-adbb-83d68192a636", 00:11:07.933 "is_configured": true, 00:11:07.933 "data_offset": 0, 00:11:07.933 "data_size": 65536 00:11:07.933 }, 00:11:07.933 { 00:11:07.933 "name": "BaseBdev2", 00:11:07.933 "uuid": "0e8a60db-8747-412f-b67d-686897859d62", 00:11:07.933 "is_configured": true, 00:11:07.933 "data_offset": 0, 00:11:07.933 "data_size": 65536 00:11:07.933 }, 00:11:07.933 { 00:11:07.933 "name": "BaseBdev3", 00:11:07.933 "uuid": "20f26049-ccf6-41ae-a3a9-b70232cf3927", 00:11:07.933 "is_configured": true, 00:11:07.933 "data_offset": 0, 00:11:07.933 "data_size": 65536 00:11:07.933 } 00:11:07.933 ] 00:11:07.933 } 00:11:07.933 } 00:11:07.933 }' 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:07.933 BaseBdev2 00:11:07.933 BaseBdev3' 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.933 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.934 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.934 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.934 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:07.934 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.934 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.934 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.934 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
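`verify_raid_bdev_properties` (bdev_raid.sh@181 in the trace) makes two jq passes over `bdev_get_bdevs` output: it collects the configured member names with `.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name`, then builds `'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` for the raid bdev and for each member and compares the strings. jq's `join` renders null/absent fields as empty strings, which is why the trace records `cmp_base_bdev='512 '` with trailing blanks and compares it as `[[ ... == \5\1\2\ \ \ ]]`. A hedged Python re-creation of both filters on a trimmed copy of the dump:

```python
import json

# Trimmed from the `bdev_get_bdevs -b Existed_Raid` dump above.
raid = json.loads("""{
  "name": "Existed_Raid",
  "block_size": 512,
  "driver_specific": {"raid": {"base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true}
  ]}}
}""")

# jq: '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
base_bdev_names = [b["name"]
                   for b in raid["driver_specific"]["raid"]["base_bdevs_list"]
                   if b["is_configured"]]

def layout_string(bdev):
    # jq: '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
    # Null/missing fields join as "", producing the trailing blanks in the log.
    keys = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if bdev.get(k) is None else str(bdev[k]) for k in keys)

# Illustrative member descriptor: same 512-byte blocks, no metadata fields.
base_bdev = {"name": "BaseBdev1", "block_size": 512}
print(base_bdev_names)                                  # ['BaseBdev1', 'BaseBdev2', 'BaseBdev3']
print(layout_string(raid) == layout_string(base_bdev))  # True
```

The comparison passing for all three members confirms each base bdev exposes the same block size and metadata/DIF layout as the raid volume built on top of it.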
00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.193 [2024-11-20 08:44:38.891004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:08.193 [2024-11-20 08:44:38.891042] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:08.193 [2024-11-20 08:44:38.891128] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.193 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.193 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.193 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.193 "name": "Existed_Raid", 00:11:08.193 "uuid": "2937bc25-0bf9-4576-adc2-869263aa81cb", 00:11:08.193 "strip_size_kb": 64, 00:11:08.193 "state": "offline", 00:11:08.193 "raid_level": "concat", 00:11:08.193 "superblock": false, 00:11:08.193 "num_base_bdevs": 3, 00:11:08.193 "num_base_bdevs_discovered": 2, 00:11:08.193 "num_base_bdevs_operational": 2, 00:11:08.193 "base_bdevs_list": [ 00:11:08.193 { 00:11:08.193 "name": null, 00:11:08.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.193 "is_configured": false, 00:11:08.193 "data_offset": 0, 00:11:08.193 "data_size": 65536 00:11:08.193 }, 00:11:08.193 { 00:11:08.193 "name": "BaseBdev2", 00:11:08.193 "uuid": 
"0e8a60db-8747-412f-b67d-686897859d62", 00:11:08.193 "is_configured": true, 00:11:08.193 "data_offset": 0, 00:11:08.193 "data_size": 65536 00:11:08.193 }, 00:11:08.193 { 00:11:08.193 "name": "BaseBdev3", 00:11:08.193 "uuid": "20f26049-ccf6-41ae-a3a9-b70232cf3927", 00:11:08.193 "is_configured": true, 00:11:08.193 "data_offset": 0, 00:11:08.193 "data_size": 65536 00:11:08.193 } 00:11:08.194 ] 00:11:08.194 }' 00:11:08.194 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.194 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.761 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:08.761 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.761 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.761 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.762 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.762 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.762 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.762 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:08.762 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.762 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:08.762 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.762 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.762 [2024-11-20 08:44:39.547841] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:08.762 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.762 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.762 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.762 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.762 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.762 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.762 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.762 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.021 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:09.021 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:09.021 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:09.021 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.021 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.021 [2024-11-20 08:44:39.691915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:09.021 [2024-11-20 08:44:39.691992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:09.021 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.021 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:09.021 08:44:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:09.021 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.021 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:09.021 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.021 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.021 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.021 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:09.021 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.022 BaseBdev2 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.022 
08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.022 [ 00:11:09.022 { 00:11:09.022 "name": "BaseBdev2", 00:11:09.022 "aliases": [ 00:11:09.022 "bcb9f1e9-07f5-406d-8322-be9b3a94a8cf" 00:11:09.022 ], 00:11:09.022 "product_name": "Malloc disk", 00:11:09.022 "block_size": 512, 00:11:09.022 "num_blocks": 65536, 00:11:09.022 "uuid": "bcb9f1e9-07f5-406d-8322-be9b3a94a8cf", 00:11:09.022 "assigned_rate_limits": { 00:11:09.022 "rw_ios_per_sec": 0, 00:11:09.022 "rw_mbytes_per_sec": 0, 00:11:09.022 "r_mbytes_per_sec": 0, 00:11:09.022 "w_mbytes_per_sec": 0 00:11:09.022 }, 00:11:09.022 "claimed": false, 00:11:09.022 "zoned": false, 00:11:09.022 "supported_io_types": { 00:11:09.022 "read": true, 00:11:09.022 "write": true, 00:11:09.022 "unmap": true, 00:11:09.022 "flush": true, 00:11:09.022 "reset": true, 00:11:09.022 "nvme_admin": false, 00:11:09.022 "nvme_io": false, 00:11:09.022 "nvme_io_md": false, 00:11:09.022 "write_zeroes": true, 
00:11:09.022 "zcopy": true, 00:11:09.022 "get_zone_info": false, 00:11:09.022 "zone_management": false, 00:11:09.022 "zone_append": false, 00:11:09.022 "compare": false, 00:11:09.022 "compare_and_write": false, 00:11:09.022 "abort": true, 00:11:09.022 "seek_hole": false, 00:11:09.022 "seek_data": false, 00:11:09.022 "copy": true, 00:11:09.022 "nvme_iov_md": false 00:11:09.022 }, 00:11:09.022 "memory_domains": [ 00:11:09.022 { 00:11:09.022 "dma_device_id": "system", 00:11:09.022 "dma_device_type": 1 00:11:09.022 }, 00:11:09.022 { 00:11:09.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.022 "dma_device_type": 2 00:11:09.022 } 00:11:09.022 ], 00:11:09.022 "driver_specific": {} 00:11:09.022 } 00:11:09.022 ] 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.022 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.281 BaseBdev3 00:11:09.281 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.281 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:09.281 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:09.281 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.281 08:44:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:09.281 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.281 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.281 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.281 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.281 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.281 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.281 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:09.281 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.281 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.281 [ 00:11:09.281 { 00:11:09.281 "name": "BaseBdev3", 00:11:09.281 "aliases": [ 00:11:09.281 "f6333a2a-4b96-40ad-aa21-591182f27687" 00:11:09.281 ], 00:11:09.281 "product_name": "Malloc disk", 00:11:09.281 "block_size": 512, 00:11:09.281 "num_blocks": 65536, 00:11:09.281 "uuid": "f6333a2a-4b96-40ad-aa21-591182f27687", 00:11:09.281 "assigned_rate_limits": { 00:11:09.281 "rw_ios_per_sec": 0, 00:11:09.281 "rw_mbytes_per_sec": 0, 00:11:09.281 "r_mbytes_per_sec": 0, 00:11:09.281 "w_mbytes_per_sec": 0 00:11:09.281 }, 00:11:09.281 "claimed": false, 00:11:09.281 "zoned": false, 00:11:09.281 "supported_io_types": { 00:11:09.281 "read": true, 00:11:09.281 "write": true, 00:11:09.281 "unmap": true, 00:11:09.281 "flush": true, 00:11:09.281 "reset": true, 00:11:09.281 "nvme_admin": false, 00:11:09.281 "nvme_io": false, 00:11:09.281 "nvme_io_md": false, 00:11:09.281 "write_zeroes": true, 
00:11:09.281 "zcopy": true, 00:11:09.281 "get_zone_info": false, 00:11:09.281 "zone_management": false, 00:11:09.281 "zone_append": false, 00:11:09.281 "compare": false, 00:11:09.281 "compare_and_write": false, 00:11:09.281 "abort": true, 00:11:09.281 "seek_hole": false, 00:11:09.281 "seek_data": false, 00:11:09.281 "copy": true, 00:11:09.281 "nvme_iov_md": false 00:11:09.281 }, 00:11:09.281 "memory_domains": [ 00:11:09.281 { 00:11:09.281 "dma_device_id": "system", 00:11:09.281 "dma_device_type": 1 00:11:09.281 }, 00:11:09.281 { 00:11:09.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.281 "dma_device_type": 2 00:11:09.281 } 00:11:09.281 ], 00:11:09.281 "driver_specific": {} 00:11:09.281 } 00:11:09.281 ] 00:11:09.281 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.281 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:09.281 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.281 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.281 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:09.281 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.281 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.281 [2024-11-20 08:44:39.967829] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:09.282 [2024-11-20 08:44:39.967883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:09.282 [2024-11-20 08:44:39.967946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.282 [2024-11-20 08:44:39.970384] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.282 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.282 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:09.282 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.282 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.282 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.282 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.282 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:09.282 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.282 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.282 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.282 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.282 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.282 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.282 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.282 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.282 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.282 08:44:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.282 "name": "Existed_Raid", 00:11:09.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.282 "strip_size_kb": 64, 00:11:09.282 "state": "configuring", 00:11:09.282 "raid_level": "concat", 00:11:09.282 "superblock": false, 00:11:09.282 "num_base_bdevs": 3, 00:11:09.282 "num_base_bdevs_discovered": 2, 00:11:09.282 "num_base_bdevs_operational": 3, 00:11:09.282 "base_bdevs_list": [ 00:11:09.282 { 00:11:09.282 "name": "BaseBdev1", 00:11:09.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.282 "is_configured": false, 00:11:09.282 "data_offset": 0, 00:11:09.282 "data_size": 0 00:11:09.282 }, 00:11:09.282 { 00:11:09.282 "name": "BaseBdev2", 00:11:09.282 "uuid": "bcb9f1e9-07f5-406d-8322-be9b3a94a8cf", 00:11:09.282 "is_configured": true, 00:11:09.282 "data_offset": 0, 00:11:09.282 "data_size": 65536 00:11:09.282 }, 00:11:09.282 { 00:11:09.282 "name": "BaseBdev3", 00:11:09.282 "uuid": "f6333a2a-4b96-40ad-aa21-591182f27687", 00:11:09.282 "is_configured": true, 00:11:09.282 "data_offset": 0, 00:11:09.282 "data_size": 65536 00:11:09.282 } 00:11:09.282 ] 00:11:09.282 }' 00:11:09.282 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.282 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.848 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:09.848 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.848 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.848 [2024-11-20 08:44:40.483984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:09.848 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.848 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:09.848 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.848 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.848 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.848 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.848 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:09.848 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.848 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.848 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.848 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.848 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.848 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.848 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.848 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.848 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.848 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.848 "name": "Existed_Raid", 00:11:09.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.848 "strip_size_kb": 64, 00:11:09.848 "state": "configuring", 00:11:09.848 "raid_level": "concat", 00:11:09.848 "superblock": false, 
00:11:09.848 "num_base_bdevs": 3, 00:11:09.848 "num_base_bdevs_discovered": 1, 00:11:09.848 "num_base_bdevs_operational": 3, 00:11:09.848 "base_bdevs_list": [ 00:11:09.848 { 00:11:09.848 "name": "BaseBdev1", 00:11:09.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.848 "is_configured": false, 00:11:09.848 "data_offset": 0, 00:11:09.848 "data_size": 0 00:11:09.848 }, 00:11:09.848 { 00:11:09.848 "name": null, 00:11:09.848 "uuid": "bcb9f1e9-07f5-406d-8322-be9b3a94a8cf", 00:11:09.848 "is_configured": false, 00:11:09.848 "data_offset": 0, 00:11:09.848 "data_size": 65536 00:11:09.848 }, 00:11:09.848 { 00:11:09.848 "name": "BaseBdev3", 00:11:09.848 "uuid": "f6333a2a-4b96-40ad-aa21-591182f27687", 00:11:09.848 "is_configured": true, 00:11:09.848 "data_offset": 0, 00:11:09.848 "data_size": 65536 00:11:09.848 } 00:11:09.848 ] 00:11:09.848 }' 00:11:09.848 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.848 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.107 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.107 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:10.107 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.107 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.366 
08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.366 [2024-11-20 08:44:41.106648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.366 BaseBdev1 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.366 [ 00:11:10.366 { 00:11:10.366 "name": "BaseBdev1", 00:11:10.366 "aliases": [ 00:11:10.366 "6ca9afa2-7362-490b-a0ce-f990080ee955" 00:11:10.366 ], 00:11:10.366 "product_name": 
"Malloc disk", 00:11:10.366 "block_size": 512, 00:11:10.366 "num_blocks": 65536, 00:11:10.366 "uuid": "6ca9afa2-7362-490b-a0ce-f990080ee955", 00:11:10.366 "assigned_rate_limits": { 00:11:10.366 "rw_ios_per_sec": 0, 00:11:10.366 "rw_mbytes_per_sec": 0, 00:11:10.366 "r_mbytes_per_sec": 0, 00:11:10.366 "w_mbytes_per_sec": 0 00:11:10.366 }, 00:11:10.366 "claimed": true, 00:11:10.366 "claim_type": "exclusive_write", 00:11:10.366 "zoned": false, 00:11:10.366 "supported_io_types": { 00:11:10.366 "read": true, 00:11:10.366 "write": true, 00:11:10.366 "unmap": true, 00:11:10.366 "flush": true, 00:11:10.366 "reset": true, 00:11:10.366 "nvme_admin": false, 00:11:10.366 "nvme_io": false, 00:11:10.366 "nvme_io_md": false, 00:11:10.366 "write_zeroes": true, 00:11:10.366 "zcopy": true, 00:11:10.366 "get_zone_info": false, 00:11:10.366 "zone_management": false, 00:11:10.366 "zone_append": false, 00:11:10.366 "compare": false, 00:11:10.366 "compare_and_write": false, 00:11:10.366 "abort": true, 00:11:10.366 "seek_hole": false, 00:11:10.366 "seek_data": false, 00:11:10.366 "copy": true, 00:11:10.366 "nvme_iov_md": false 00:11:10.366 }, 00:11:10.366 "memory_domains": [ 00:11:10.366 { 00:11:10.366 "dma_device_id": "system", 00:11:10.366 "dma_device_type": 1 00:11:10.366 }, 00:11:10.366 { 00:11:10.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.366 "dma_device_type": 2 00:11:10.366 } 00:11:10.366 ], 00:11:10.366 "driver_specific": {} 00:11:10.366 } 00:11:10.366 ] 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.366 08:44:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.366 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.366 "name": "Existed_Raid", 00:11:10.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.366 "strip_size_kb": 64, 00:11:10.366 "state": "configuring", 00:11:10.366 "raid_level": "concat", 00:11:10.366 "superblock": false, 00:11:10.366 "num_base_bdevs": 3, 00:11:10.366 "num_base_bdevs_discovered": 2, 00:11:10.366 "num_base_bdevs_operational": 3, 00:11:10.366 "base_bdevs_list": [ 00:11:10.366 { 00:11:10.366 "name": "BaseBdev1", 
00:11:10.366 "uuid": "6ca9afa2-7362-490b-a0ce-f990080ee955", 00:11:10.366 "is_configured": true, 00:11:10.366 "data_offset": 0, 00:11:10.366 "data_size": 65536 00:11:10.366 }, 00:11:10.366 { 00:11:10.366 "name": null, 00:11:10.366 "uuid": "bcb9f1e9-07f5-406d-8322-be9b3a94a8cf", 00:11:10.366 "is_configured": false, 00:11:10.366 "data_offset": 0, 00:11:10.366 "data_size": 65536 00:11:10.366 }, 00:11:10.366 { 00:11:10.366 "name": "BaseBdev3", 00:11:10.366 "uuid": "f6333a2a-4b96-40ad-aa21-591182f27687", 00:11:10.366 "is_configured": true, 00:11:10.366 "data_offset": 0, 00:11:10.366 "data_size": 65536 00:11:10.366 } 00:11:10.366 ] 00:11:10.366 }' 00:11:10.367 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.367 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.936 [2024-11-20 08:44:41.782933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:10.936 
08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.936 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.936 "name": "Existed_Raid", 00:11:10.936 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:10.936 "strip_size_kb": 64, 00:11:10.936 "state": "configuring", 00:11:10.936 "raid_level": "concat", 00:11:10.936 "superblock": false, 00:11:10.936 "num_base_bdevs": 3, 00:11:10.936 "num_base_bdevs_discovered": 1, 00:11:10.936 "num_base_bdevs_operational": 3, 00:11:10.936 "base_bdevs_list": [ 00:11:10.936 { 00:11:10.936 "name": "BaseBdev1", 00:11:10.936 "uuid": "6ca9afa2-7362-490b-a0ce-f990080ee955", 00:11:10.936 "is_configured": true, 00:11:10.936 "data_offset": 0, 00:11:10.936 "data_size": 65536 00:11:10.936 }, 00:11:10.936 { 00:11:10.936 "name": null, 00:11:10.936 "uuid": "bcb9f1e9-07f5-406d-8322-be9b3a94a8cf", 00:11:10.936 "is_configured": false, 00:11:10.936 "data_offset": 0, 00:11:10.936 "data_size": 65536 00:11:10.936 }, 00:11:10.936 { 00:11:10.936 "name": null, 00:11:10.936 "uuid": "f6333a2a-4b96-40ad-aa21-591182f27687", 00:11:10.936 "is_configured": false, 00:11:10.936 "data_offset": 0, 00:11:10.936 "data_size": 65536 00:11:10.936 } 00:11:10.936 ] 00:11:10.937 }' 00:11:10.937 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.937 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.506 [2024-11-20 08:44:42.355123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.506 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.506 "name": "Existed_Raid", 00:11:11.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.506 "strip_size_kb": 64, 00:11:11.506 "state": "configuring", 00:11:11.506 "raid_level": "concat", 00:11:11.506 "superblock": false, 00:11:11.506 "num_base_bdevs": 3, 00:11:11.506 "num_base_bdevs_discovered": 2, 00:11:11.506 "num_base_bdevs_operational": 3, 00:11:11.506 "base_bdevs_list": [ 00:11:11.506 { 00:11:11.506 "name": "BaseBdev1", 00:11:11.506 "uuid": "6ca9afa2-7362-490b-a0ce-f990080ee955", 00:11:11.506 "is_configured": true, 00:11:11.506 "data_offset": 0, 00:11:11.506 "data_size": 65536 00:11:11.506 }, 00:11:11.506 { 00:11:11.506 "name": null, 00:11:11.506 "uuid": "bcb9f1e9-07f5-406d-8322-be9b3a94a8cf", 00:11:11.506 "is_configured": false, 00:11:11.506 "data_offset": 0, 00:11:11.506 "data_size": 65536 00:11:11.507 }, 00:11:11.507 { 00:11:11.507 "name": "BaseBdev3", 00:11:11.507 "uuid": "f6333a2a-4b96-40ad-aa21-591182f27687", 00:11:11.507 "is_configured": true, 00:11:11.507 "data_offset": 0, 00:11:11.507 "data_size": 65536 00:11:11.507 } 00:11:11.507 ] 00:11:11.507 }' 00:11:11.507 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.507 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.075 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:12.075 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.075 08:44:42 bdev_raid.raid_state_function_test -- 
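The `base_bdevs_list` entries above show what removal and re-addition do to a slot: after `bdev_raid_remove_base_bdev BaseBdev3` the slot keeps its uuid (`f6333a2a-...`) but its name goes to `null` and `is_configured` flips to `false`, and the later `bdev_raid_add_base_bdev Existed_Raid BaseBdev3` repopulates that same slot. A simplified model of the bookkeeping observable in this log (the uuid match below is an illustrative assumption, not SPDK's actual claim logic):

```shell
# Slot state after "bdev_raid_remove_base_bdev BaseBdev3", as dumped above:
slot_uuid="f6333a2a-4b96-40ad-aa21-591182f27687"   # uuid survives removal
slot_name=null
slot_configured=false

# "bdev_raid_add_base_bdev Existed_Raid BaseBdev3" fills the vacant slot again
# (modelled here as a uuid match; hypothetical simplification).
added_uuid="f6333a2a-4b96-40ad-aa21-591182f27687"
if [ "$added_uuid" = "$slot_uuid" ]; then
    slot_name=BaseBdev3
    slot_configured=true
fi
echo "$slot_name configured=$slot_configured"
```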
common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.075 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.075 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.075 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:12.075 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:12.075 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.075 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.075 [2024-11-20 08:44:42.919298] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:12.335 08:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.335 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:12.335 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.335 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.335 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.335 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.335 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:12.335 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.335 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.335 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.335 
08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.335 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.335 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.335 08:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.335 08:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.335 08:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.335 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.335 "name": "Existed_Raid", 00:11:12.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.335 "strip_size_kb": 64, 00:11:12.335 "state": "configuring", 00:11:12.335 "raid_level": "concat", 00:11:12.335 "superblock": false, 00:11:12.335 "num_base_bdevs": 3, 00:11:12.335 "num_base_bdevs_discovered": 1, 00:11:12.335 "num_base_bdevs_operational": 3, 00:11:12.335 "base_bdevs_list": [ 00:11:12.335 { 00:11:12.335 "name": null, 00:11:12.335 "uuid": "6ca9afa2-7362-490b-a0ce-f990080ee955", 00:11:12.335 "is_configured": false, 00:11:12.335 "data_offset": 0, 00:11:12.335 "data_size": 65536 00:11:12.335 }, 00:11:12.335 { 00:11:12.335 "name": null, 00:11:12.335 "uuid": "bcb9f1e9-07f5-406d-8322-be9b3a94a8cf", 00:11:12.335 "is_configured": false, 00:11:12.335 "data_offset": 0, 00:11:12.335 "data_size": 65536 00:11:12.335 }, 00:11:12.335 { 00:11:12.335 "name": "BaseBdev3", 00:11:12.335 "uuid": "f6333a2a-4b96-40ad-aa21-591182f27687", 00:11:12.335 "is_configured": true, 00:11:12.335 "data_offset": 0, 00:11:12.335 "data_size": 65536 00:11:12.335 } 00:11:12.335 ] 00:11:12.335 }' 00:11:12.335 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.335 08:44:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.903 [2024-11-20 08:44:43.572764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.903 08:44:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.903 "name": "Existed_Raid", 00:11:12.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.903 "strip_size_kb": 64, 00:11:12.903 "state": "configuring", 00:11:12.903 "raid_level": "concat", 00:11:12.903 "superblock": false, 00:11:12.903 "num_base_bdevs": 3, 00:11:12.903 "num_base_bdevs_discovered": 2, 00:11:12.903 "num_base_bdevs_operational": 3, 00:11:12.903 "base_bdevs_list": [ 00:11:12.903 { 00:11:12.903 "name": null, 00:11:12.903 "uuid": "6ca9afa2-7362-490b-a0ce-f990080ee955", 00:11:12.903 "is_configured": false, 00:11:12.903 "data_offset": 0, 00:11:12.903 "data_size": 65536 00:11:12.903 }, 00:11:12.903 { 00:11:12.903 "name": "BaseBdev2", 00:11:12.903 "uuid": "bcb9f1e9-07f5-406d-8322-be9b3a94a8cf", 00:11:12.903 "is_configured": true, 00:11:12.903 "data_offset": 
0, 00:11:12.903 "data_size": 65536 00:11:12.903 }, 00:11:12.903 { 00:11:12.903 "name": "BaseBdev3", 00:11:12.903 "uuid": "f6333a2a-4b96-40ad-aa21-591182f27687", 00:11:12.903 "is_configured": true, 00:11:12.903 "data_offset": 0, 00:11:12.903 "data_size": 65536 00:11:12.903 } 00:11:12.903 ] 00:11:12.903 }' 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.903 08:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6ca9afa2-7362-490b-a0ce-f990080ee955 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.472 [2024-11-20 08:44:44.270838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:13.472 [2024-11-20 08:44:44.270913] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:13.472 [2024-11-20 08:44:44.270929] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:13.472 [2024-11-20 08:44:44.271291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:13.472 [2024-11-20 08:44:44.271489] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:13.472 [2024-11-20 08:44:44.271517] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:13.472 [2024-11-20 08:44:44.271831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.472 NewBaseBdev 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:13.472 
08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.472 [ 00:11:13.472 { 00:11:13.472 "name": "NewBaseBdev", 00:11:13.472 "aliases": [ 00:11:13.472 "6ca9afa2-7362-490b-a0ce-f990080ee955" 00:11:13.472 ], 00:11:13.472 "product_name": "Malloc disk", 00:11:13.472 "block_size": 512, 00:11:13.472 "num_blocks": 65536, 00:11:13.472 "uuid": "6ca9afa2-7362-490b-a0ce-f990080ee955", 00:11:13.472 "assigned_rate_limits": { 00:11:13.472 "rw_ios_per_sec": 0, 00:11:13.472 "rw_mbytes_per_sec": 0, 00:11:13.472 "r_mbytes_per_sec": 0, 00:11:13.472 "w_mbytes_per_sec": 0 00:11:13.472 }, 00:11:13.472 "claimed": true, 00:11:13.472 "claim_type": "exclusive_write", 00:11:13.472 "zoned": false, 00:11:13.472 "supported_io_types": { 00:11:13.472 "read": true, 00:11:13.472 "write": true, 00:11:13.472 "unmap": true, 00:11:13.472 "flush": true, 00:11:13.472 "reset": true, 00:11:13.472 "nvme_admin": false, 00:11:13.472 "nvme_io": false, 00:11:13.472 "nvme_io_md": false, 00:11:13.472 "write_zeroes": true, 00:11:13.472 "zcopy": true, 00:11:13.472 "get_zone_info": false, 00:11:13.472 "zone_management": false, 00:11:13.472 "zone_append": false, 00:11:13.472 "compare": false, 00:11:13.472 "compare_and_write": false, 00:11:13.472 "abort": true, 00:11:13.472 "seek_hole": false, 00:11:13.472 "seek_data": false, 00:11:13.472 "copy": true, 00:11:13.472 "nvme_iov_md": false 00:11:13.472 }, 00:11:13.472 
"memory_domains": [ 00:11:13.472 { 00:11:13.472 "dma_device_id": "system", 00:11:13.472 "dma_device_type": 1 00:11:13.472 }, 00:11:13.472 { 00:11:13.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.472 "dma_device_type": 2 00:11:13.472 } 00:11:13.472 ], 00:11:13.472 "driver_specific": {} 00:11:13.472 } 00:11:13.472 ] 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.472 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.473 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.473 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.473 08:44:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:13.473 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.473 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.473 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.473 "name": "Existed_Raid", 00:11:13.473 "uuid": "e512c501-6bde-4069-b194-0a4e19067eb3", 00:11:13.473 "strip_size_kb": 64, 00:11:13.473 "state": "online", 00:11:13.473 "raid_level": "concat", 00:11:13.473 "superblock": false, 00:11:13.473 "num_base_bdevs": 3, 00:11:13.473 "num_base_bdevs_discovered": 3, 00:11:13.473 "num_base_bdevs_operational": 3, 00:11:13.473 "base_bdevs_list": [ 00:11:13.473 { 00:11:13.473 "name": "NewBaseBdev", 00:11:13.473 "uuid": "6ca9afa2-7362-490b-a0ce-f990080ee955", 00:11:13.473 "is_configured": true, 00:11:13.473 "data_offset": 0, 00:11:13.473 "data_size": 65536 00:11:13.473 }, 00:11:13.473 { 00:11:13.473 "name": "BaseBdev2", 00:11:13.473 "uuid": "bcb9f1e9-07f5-406d-8322-be9b3a94a8cf", 00:11:13.473 "is_configured": true, 00:11:13.473 "data_offset": 0, 00:11:13.473 "data_size": 65536 00:11:13.473 }, 00:11:13.473 { 00:11:13.473 "name": "BaseBdev3", 00:11:13.473 "uuid": "f6333a2a-4b96-40ad-aa21-591182f27687", 00:11:13.473 "is_configured": true, 00:11:13.473 "data_offset": 0, 00:11:13.473 "data_size": 65536 00:11:13.473 } 00:11:13.473 ] 00:11:13.473 }' 00:11:13.473 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.473 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.043 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:14.043 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:14.043 08:44:44 bdev_raid.raid_state_function_test -- 
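The state flip visible above ("configuring" in every earlier dump, "online" once NewBaseBdev is claimed) matches a simple invariant the test exercises: the raid bdev stays configuring while any slot is unconfigured, and goes online when `num_base_bdevs_discovered` reaches `num_base_bdevs`. A sketch of that check:

```shell
# is_configured flags of the three slots once NewBaseBdev is claimed (see dump above):
configured_flags="true true true"

num_base_bdevs=3
num_discovered=0
for flag in $configured_flags; do
    if [ "$flag" = "true" ]; then
        num_discovered=$((num_discovered + 1))
    fi
done

# All slots configured => the raid bdev leaves "configuring".
if [ "$num_discovered" -eq "$num_base_bdevs" ]; then
    state=online
else
    state=configuring
fi
echo "num_base_bdevs_discovered=$num_discovered state=$state"
```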
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:14.043 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:14.043 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:14.043 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:14.043 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:14.043 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:14.043 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.043 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.043 [2024-11-20 08:44:44.843441] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:14.043 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.043 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:14.043 "name": "Existed_Raid", 00:11:14.043 "aliases": [ 00:11:14.043 "e512c501-6bde-4069-b194-0a4e19067eb3" 00:11:14.043 ], 00:11:14.043 "product_name": "Raid Volume", 00:11:14.043 "block_size": 512, 00:11:14.043 "num_blocks": 196608, 00:11:14.043 "uuid": "e512c501-6bde-4069-b194-0a4e19067eb3", 00:11:14.043 "assigned_rate_limits": { 00:11:14.043 "rw_ios_per_sec": 0, 00:11:14.043 "rw_mbytes_per_sec": 0, 00:11:14.043 "r_mbytes_per_sec": 0, 00:11:14.043 "w_mbytes_per_sec": 0 00:11:14.043 }, 00:11:14.043 "claimed": false, 00:11:14.043 "zoned": false, 00:11:14.043 "supported_io_types": { 00:11:14.043 "read": true, 00:11:14.043 "write": true, 00:11:14.043 "unmap": true, 00:11:14.043 "flush": true, 00:11:14.043 "reset": true, 00:11:14.043 "nvme_admin": false, 00:11:14.043 "nvme_io": false, 00:11:14.043 "nvme_io_md": false, 00:11:14.043 
"write_zeroes": true, 00:11:14.043 "zcopy": false, 00:11:14.043 "get_zone_info": false, 00:11:14.043 "zone_management": false, 00:11:14.043 "zone_append": false, 00:11:14.043 "compare": false, 00:11:14.043 "compare_and_write": false, 00:11:14.043 "abort": false, 00:11:14.043 "seek_hole": false, 00:11:14.043 "seek_data": false, 00:11:14.043 "copy": false, 00:11:14.043 "nvme_iov_md": false 00:11:14.043 }, 00:11:14.043 "memory_domains": [ 00:11:14.043 { 00:11:14.043 "dma_device_id": "system", 00:11:14.043 "dma_device_type": 1 00:11:14.043 }, 00:11:14.043 { 00:11:14.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.044 "dma_device_type": 2 00:11:14.044 }, 00:11:14.044 { 00:11:14.044 "dma_device_id": "system", 00:11:14.044 "dma_device_type": 1 00:11:14.044 }, 00:11:14.044 { 00:11:14.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.044 "dma_device_type": 2 00:11:14.044 }, 00:11:14.044 { 00:11:14.044 "dma_device_id": "system", 00:11:14.044 "dma_device_type": 1 00:11:14.044 }, 00:11:14.044 { 00:11:14.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.044 "dma_device_type": 2 00:11:14.044 } 00:11:14.044 ], 00:11:14.044 "driver_specific": { 00:11:14.044 "raid": { 00:11:14.044 "uuid": "e512c501-6bde-4069-b194-0a4e19067eb3", 00:11:14.044 "strip_size_kb": 64, 00:11:14.044 "state": "online", 00:11:14.044 "raid_level": "concat", 00:11:14.044 "superblock": false, 00:11:14.044 "num_base_bdevs": 3, 00:11:14.044 "num_base_bdevs_discovered": 3, 00:11:14.044 "num_base_bdevs_operational": 3, 00:11:14.044 "base_bdevs_list": [ 00:11:14.044 { 00:11:14.044 "name": "NewBaseBdev", 00:11:14.044 "uuid": "6ca9afa2-7362-490b-a0ce-f990080ee955", 00:11:14.044 "is_configured": true, 00:11:14.044 "data_offset": 0, 00:11:14.044 "data_size": 65536 00:11:14.044 }, 00:11:14.044 { 00:11:14.044 "name": "BaseBdev2", 00:11:14.044 "uuid": "bcb9f1e9-07f5-406d-8322-be9b3a94a8cf", 00:11:14.044 "is_configured": true, 00:11:14.044 "data_offset": 0, 00:11:14.044 "data_size": 65536 00:11:14.044 }, 
00:11:14.044 { 00:11:14.044 "name": "BaseBdev3", 00:11:14.044 "uuid": "f6333a2a-4b96-40ad-aa21-591182f27687", 00:11:14.044 "is_configured": true, 00:11:14.044 "data_offset": 0, 00:11:14.044 "data_size": 65536 00:11:14.044 } 00:11:14.044 ] 00:11:14.044 } 00:11:14.044 } 00:11:14.044 }' 00:11:14.044 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:14.303 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:14.303 BaseBdev2 00:11:14.303 BaseBdev3' 00:11:14.303 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.303 08:44:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.303 
08:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.303 [2024-11-20 08:44:45.203204] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:14.303 [2024-11-20 08:44:45.203251] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.303 [2024-11-20 08:44:45.203356] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.303 [2024-11-20 08:44:45.203441] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.303 [2024-11-20 08:44:45.203462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65613 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65613 ']' 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65613 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.303 08:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65613 00:11:14.561 killing process with pid 65613 00:11:14.561 08:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:14.561 08:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:14.561 08:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65613' 00:11:14.561 08:44:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65613 00:11:14.561 [2024-11-20 08:44:45.244428] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:14.561 08:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65613 00:11:14.820 [2024-11-20 08:44:45.517156] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:15.758 00:11:15.758 real 0m11.937s 00:11:15.758 user 0m19.882s 00:11:15.758 sys 0m1.602s 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.758 ************************************ 00:11:15.758 END TEST raid_state_function_test 00:11:15.758 ************************************ 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.758 08:44:46 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:11:15.758 08:44:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:15.758 08:44:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.758 08:44:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:15.758 ************************************ 00:11:15.758 START TEST raid_state_function_test_sb 00:11:15.758 ************************************ 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:15.758 08:44:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:15.758 08:44:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66252 00:11:15.758 Process raid pid: 66252 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66252' 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66252 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66252 ']' 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.758 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.037 [2024-11-20 08:44:46.761031] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:11:16.037 [2024-11-20 08:44:46.761238] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.297 [2024-11-20 08:44:46.953495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.297 [2024-11-20 08:44:47.085333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.555 [2024-11-20 08:44:47.295381] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.555 [2024-11-20 08:44:47.295442] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.124 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.124 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:17.124 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:17.124 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.124 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.124 [2024-11-20 08:44:47.739227] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:17.124 [2024-11-20 08:44:47.739321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:17.124 [2024-11-20 
08:44:47.739339] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:17.124 [2024-11-20 08:44:47.739357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:17.124 [2024-11-20 08:44:47.739368] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:17.124 [2024-11-20 08:44:47.739383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:17.124 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.124 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:17.124 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.124 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.124 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.124 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.124 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.124 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.124 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.124 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.124 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.124 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.124 08:44:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.124 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.124 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.124 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.124 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.124 "name": "Existed_Raid", 00:11:17.124 "uuid": "88445cad-6830-4547-bf0c-afc89bd1234b", 00:11:17.124 "strip_size_kb": 64, 00:11:17.124 "state": "configuring", 00:11:17.124 "raid_level": "concat", 00:11:17.124 "superblock": true, 00:11:17.124 "num_base_bdevs": 3, 00:11:17.124 "num_base_bdevs_discovered": 0, 00:11:17.124 "num_base_bdevs_operational": 3, 00:11:17.124 "base_bdevs_list": [ 00:11:17.124 { 00:11:17.124 "name": "BaseBdev1", 00:11:17.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.124 "is_configured": false, 00:11:17.124 "data_offset": 0, 00:11:17.124 "data_size": 0 00:11:17.124 }, 00:11:17.124 { 00:11:17.124 "name": "BaseBdev2", 00:11:17.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.124 "is_configured": false, 00:11:17.124 "data_offset": 0, 00:11:17.124 "data_size": 0 00:11:17.124 }, 00:11:17.124 { 00:11:17.124 "name": "BaseBdev3", 00:11:17.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.124 "is_configured": false, 00:11:17.124 "data_offset": 0, 00:11:17.124 "data_size": 0 00:11:17.124 } 00:11:17.124 ] 00:11:17.124 }' 00:11:17.124 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.124 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.383 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:17.383 08:44:48 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.383 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.383 [2024-11-20 08:44:48.271325] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:17.383 [2024-11-20 08:44:48.271373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:17.383 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.383 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:17.383 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.383 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.383 [2024-11-20 08:44:48.279313] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:17.383 [2024-11-20 08:44:48.279372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:17.383 [2024-11-20 08:44:48.279388] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:17.383 [2024-11-20 08:44:48.279404] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:17.383 [2024-11-20 08:44:48.279414] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:17.383 [2024-11-20 08:44:48.279427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:17.383 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.383 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:17.383 
08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.383 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.642 [2024-11-20 08:44:48.324840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:17.642 BaseBdev1 00:11:17.642 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.642 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:17.642 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:17.642 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.642 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:17.642 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.642 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.642 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.642 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.642 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.642 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.642 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:17.642 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.642 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.642 [ 00:11:17.642 { 
00:11:17.642 "name": "BaseBdev1", 00:11:17.642 "aliases": [ 00:11:17.642 "616484c9-5869-47eb-a771-9249e1669461" 00:11:17.642 ], 00:11:17.642 "product_name": "Malloc disk", 00:11:17.642 "block_size": 512, 00:11:17.642 "num_blocks": 65536, 00:11:17.642 "uuid": "616484c9-5869-47eb-a771-9249e1669461", 00:11:17.642 "assigned_rate_limits": { 00:11:17.642 "rw_ios_per_sec": 0, 00:11:17.642 "rw_mbytes_per_sec": 0, 00:11:17.642 "r_mbytes_per_sec": 0, 00:11:17.642 "w_mbytes_per_sec": 0 00:11:17.642 }, 00:11:17.642 "claimed": true, 00:11:17.642 "claim_type": "exclusive_write", 00:11:17.642 "zoned": false, 00:11:17.643 "supported_io_types": { 00:11:17.643 "read": true, 00:11:17.643 "write": true, 00:11:17.643 "unmap": true, 00:11:17.643 "flush": true, 00:11:17.643 "reset": true, 00:11:17.643 "nvme_admin": false, 00:11:17.643 "nvme_io": false, 00:11:17.643 "nvme_io_md": false, 00:11:17.643 "write_zeroes": true, 00:11:17.643 "zcopy": true, 00:11:17.643 "get_zone_info": false, 00:11:17.643 "zone_management": false, 00:11:17.643 "zone_append": false, 00:11:17.643 "compare": false, 00:11:17.643 "compare_and_write": false, 00:11:17.643 "abort": true, 00:11:17.643 "seek_hole": false, 00:11:17.643 "seek_data": false, 00:11:17.643 "copy": true, 00:11:17.643 "nvme_iov_md": false 00:11:17.643 }, 00:11:17.643 "memory_domains": [ 00:11:17.643 { 00:11:17.643 "dma_device_id": "system", 00:11:17.643 "dma_device_type": 1 00:11:17.643 }, 00:11:17.643 { 00:11:17.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.643 "dma_device_type": 2 00:11:17.643 } 00:11:17.643 ], 00:11:17.643 "driver_specific": {} 00:11:17.643 } 00:11:17.643 ] 00:11:17.643 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.643 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:17.643 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:11:17.643 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.643 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.643 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.643 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.643 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.643 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.643 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.643 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.643 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.643 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.643 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.643 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.643 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.643 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.643 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.643 "name": "Existed_Raid", 00:11:17.643 "uuid": "74ff3b4d-2ae2-4420-8b94-d2bc60c4bf24", 00:11:17.643 "strip_size_kb": 64, 00:11:17.643 "state": "configuring", 00:11:17.643 "raid_level": "concat", 00:11:17.643 "superblock": true, 00:11:17.643 
"num_base_bdevs": 3, 00:11:17.643 "num_base_bdevs_discovered": 1, 00:11:17.643 "num_base_bdevs_operational": 3, 00:11:17.643 "base_bdevs_list": [ 00:11:17.643 { 00:11:17.643 "name": "BaseBdev1", 00:11:17.643 "uuid": "616484c9-5869-47eb-a771-9249e1669461", 00:11:17.643 "is_configured": true, 00:11:17.643 "data_offset": 2048, 00:11:17.643 "data_size": 63488 00:11:17.643 }, 00:11:17.643 { 00:11:17.643 "name": "BaseBdev2", 00:11:17.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.643 "is_configured": false, 00:11:17.643 "data_offset": 0, 00:11:17.643 "data_size": 0 00:11:17.643 }, 00:11:17.643 { 00:11:17.643 "name": "BaseBdev3", 00:11:17.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.643 "is_configured": false, 00:11:17.643 "data_offset": 0, 00:11:17.643 "data_size": 0 00:11:17.643 } 00:11:17.643 ] 00:11:17.643 }' 00:11:17.643 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.643 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.211 [2024-11-20 08:44:48.825043] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:18.211 [2024-11-20 08:44:48.825109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:18.211 
08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.211 [2024-11-20 08:44:48.833107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:18.211 [2024-11-20 08:44:48.835562] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:18.211 [2024-11-20 08:44:48.835633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:18.211 [2024-11-20 08:44:48.835657] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:18.211 [2024-11-20 08:44:48.835673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.211 "name": "Existed_Raid", 00:11:18.211 "uuid": "9b4db003-f432-4739-849d-b1c3c374a462", 00:11:18.211 "strip_size_kb": 64, 00:11:18.211 "state": "configuring", 00:11:18.211 "raid_level": "concat", 00:11:18.211 "superblock": true, 00:11:18.211 "num_base_bdevs": 3, 00:11:18.211 "num_base_bdevs_discovered": 1, 00:11:18.211 "num_base_bdevs_operational": 3, 00:11:18.211 "base_bdevs_list": [ 00:11:18.211 { 00:11:18.211 "name": "BaseBdev1", 00:11:18.211 "uuid": "616484c9-5869-47eb-a771-9249e1669461", 00:11:18.211 "is_configured": true, 00:11:18.211 "data_offset": 2048, 00:11:18.211 "data_size": 63488 00:11:18.211 }, 00:11:18.211 { 00:11:18.211 "name": "BaseBdev2", 00:11:18.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.211 "is_configured": false, 00:11:18.211 "data_offset": 0, 00:11:18.211 "data_size": 0 00:11:18.211 }, 00:11:18.211 { 00:11:18.211 "name": "BaseBdev3", 00:11:18.211 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:18.211 "is_configured": false, 00:11:18.211 "data_offset": 0, 00:11:18.211 "data_size": 0 00:11:18.211 } 00:11:18.211 ] 00:11:18.211 }' 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.211 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.473 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:18.473 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.473 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.473 [2024-11-20 08:44:49.384063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:18.473 BaseBdev2 00:11:18.473 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.473 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:18.473 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:18.473 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.473 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:18.473 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.473 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.473 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.473 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.732 [ 00:11:18.732 { 00:11:18.732 "name": "BaseBdev2", 00:11:18.732 "aliases": [ 00:11:18.732 "91b928a9-d10d-425a-b316-defdd271f81e" 00:11:18.732 ], 00:11:18.732 "product_name": "Malloc disk", 00:11:18.732 "block_size": 512, 00:11:18.732 "num_blocks": 65536, 00:11:18.732 "uuid": "91b928a9-d10d-425a-b316-defdd271f81e", 00:11:18.732 "assigned_rate_limits": { 00:11:18.732 "rw_ios_per_sec": 0, 00:11:18.732 "rw_mbytes_per_sec": 0, 00:11:18.732 "r_mbytes_per_sec": 0, 00:11:18.732 "w_mbytes_per_sec": 0 00:11:18.732 }, 00:11:18.732 "claimed": true, 00:11:18.732 "claim_type": "exclusive_write", 00:11:18.732 "zoned": false, 00:11:18.732 "supported_io_types": { 00:11:18.732 "read": true, 00:11:18.732 "write": true, 00:11:18.732 "unmap": true, 00:11:18.732 "flush": true, 00:11:18.732 "reset": true, 00:11:18.732 "nvme_admin": false, 00:11:18.732 "nvme_io": false, 00:11:18.732 "nvme_io_md": false, 00:11:18.732 "write_zeroes": true, 00:11:18.732 "zcopy": true, 00:11:18.732 "get_zone_info": false, 00:11:18.732 "zone_management": false, 00:11:18.732 "zone_append": false, 00:11:18.732 "compare": false, 00:11:18.732 "compare_and_write": false, 00:11:18.732 "abort": true, 00:11:18.732 "seek_hole": false, 00:11:18.732 "seek_data": false, 00:11:18.732 "copy": true, 00:11:18.732 "nvme_iov_md": false 00:11:18.732 }, 00:11:18.732 "memory_domains": [ 00:11:18.732 { 00:11:18.732 "dma_device_id": "system", 00:11:18.732 "dma_device_type": 1 00:11:18.732 }, 00:11:18.732 { 00:11:18.732 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.732 "dma_device_type": 2 00:11:18.732 } 00:11:18.732 ], 00:11:18.732 "driver_specific": {} 00:11:18.732 } 00:11:18.732 ] 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.732 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.733 "name": "Existed_Raid", 00:11:18.733 "uuid": "9b4db003-f432-4739-849d-b1c3c374a462", 00:11:18.733 "strip_size_kb": 64, 00:11:18.733 "state": "configuring", 00:11:18.733 "raid_level": "concat", 00:11:18.733 "superblock": true, 00:11:18.733 "num_base_bdevs": 3, 00:11:18.733 "num_base_bdevs_discovered": 2, 00:11:18.733 "num_base_bdevs_operational": 3, 00:11:18.733 "base_bdevs_list": [ 00:11:18.733 { 00:11:18.733 "name": "BaseBdev1", 00:11:18.733 "uuid": "616484c9-5869-47eb-a771-9249e1669461", 00:11:18.733 "is_configured": true, 00:11:18.733 "data_offset": 2048, 00:11:18.733 "data_size": 63488 00:11:18.733 }, 00:11:18.733 { 00:11:18.733 "name": "BaseBdev2", 00:11:18.733 "uuid": "91b928a9-d10d-425a-b316-defdd271f81e", 00:11:18.733 "is_configured": true, 00:11:18.733 "data_offset": 2048, 00:11:18.733 "data_size": 63488 00:11:18.733 }, 00:11:18.733 { 00:11:18.733 "name": "BaseBdev3", 00:11:18.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.733 "is_configured": false, 00:11:18.733 "data_offset": 0, 00:11:18.733 "data_size": 0 00:11:18.733 } 00:11:18.733 ] 00:11:18.733 }' 00:11:18.733 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.733 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.302 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:19.302 08:44:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.302 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.302 [2024-11-20 08:44:49.991799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:19.302 [2024-11-20 08:44:49.992106] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:19.302 [2024-11-20 08:44:49.992139] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:19.302 BaseBdev3 00:11:19.302 [2024-11-20 08:44:49.992634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:19.302 [2024-11-20 08:44:49.992836] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:19.302 [2024-11-20 08:44:49.992865] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:19.302 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.302 [2024-11-20 08:44:49.993054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.302 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:19.302 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:19.302 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:19.302 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:19.302 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:19.303 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:19.303 08:44:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:19.303 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.303 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.303 [ 00:11:19.303 { 00:11:19.303 "name": "BaseBdev3", 00:11:19.303 "aliases": [ 00:11:19.303 "9b796358-b1a4-4ba1-9508-0b2519df2f7a" 00:11:19.303 ], 00:11:19.303 "product_name": "Malloc disk", 00:11:19.303 "block_size": 512, 00:11:19.303 "num_blocks": 65536, 00:11:19.303 "uuid": "9b796358-b1a4-4ba1-9508-0b2519df2f7a", 00:11:19.303 "assigned_rate_limits": { 00:11:19.303 "rw_ios_per_sec": 0, 00:11:19.303 "rw_mbytes_per_sec": 0, 00:11:19.303 "r_mbytes_per_sec": 0, 00:11:19.303 "w_mbytes_per_sec": 0 00:11:19.303 }, 00:11:19.303 "claimed": true, 00:11:19.303 "claim_type": "exclusive_write", 00:11:19.303 "zoned": false, 00:11:19.303 "supported_io_types": { 00:11:19.303 "read": true, 00:11:19.303 "write": true, 00:11:19.303 "unmap": true, 00:11:19.303 "flush": true, 00:11:19.303 "reset": true, 00:11:19.303 "nvme_admin": false, 00:11:19.303 "nvme_io": false, 00:11:19.303 "nvme_io_md": false, 00:11:19.303 "write_zeroes": true, 00:11:19.303 "zcopy": true, 00:11:19.303 "get_zone_info": false, 00:11:19.303 "zone_management": false, 00:11:19.303 "zone_append": false, 00:11:19.303 "compare": false, 00:11:19.303 "compare_and_write": false, 00:11:19.303 "abort": true, 00:11:19.303 "seek_hole": false, 00:11:19.303 "seek_data": false, 
00:11:19.303 "copy": true, 00:11:19.303 "nvme_iov_md": false 00:11:19.303 }, 00:11:19.303 "memory_domains": [ 00:11:19.303 { 00:11:19.303 "dma_device_id": "system", 00:11:19.303 "dma_device_type": 1 00:11:19.303 }, 00:11:19.303 { 00:11:19.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.303 "dma_device_type": 2 00:11:19.303 } 00:11:19.303 ], 00:11:19.303 "driver_specific": {} 00:11:19.303 } 00:11:19.303 ] 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.303 "name": "Existed_Raid", 00:11:19.303 "uuid": "9b4db003-f432-4739-849d-b1c3c374a462", 00:11:19.303 "strip_size_kb": 64, 00:11:19.303 "state": "online", 00:11:19.303 "raid_level": "concat", 00:11:19.303 "superblock": true, 00:11:19.303 "num_base_bdevs": 3, 00:11:19.303 "num_base_bdevs_discovered": 3, 00:11:19.303 "num_base_bdevs_operational": 3, 00:11:19.303 "base_bdevs_list": [ 00:11:19.303 { 00:11:19.303 "name": "BaseBdev1", 00:11:19.303 "uuid": "616484c9-5869-47eb-a771-9249e1669461", 00:11:19.303 "is_configured": true, 00:11:19.303 "data_offset": 2048, 00:11:19.303 "data_size": 63488 00:11:19.303 }, 00:11:19.303 { 00:11:19.303 "name": "BaseBdev2", 00:11:19.303 "uuid": "91b928a9-d10d-425a-b316-defdd271f81e", 00:11:19.303 "is_configured": true, 00:11:19.303 "data_offset": 2048, 00:11:19.303 "data_size": 63488 00:11:19.303 }, 00:11:19.303 { 00:11:19.303 "name": "BaseBdev3", 00:11:19.303 "uuid": "9b796358-b1a4-4ba1-9508-0b2519df2f7a", 00:11:19.303 "is_configured": true, 00:11:19.303 "data_offset": 2048, 00:11:19.303 "data_size": 63488 00:11:19.303 } 00:11:19.303 ] 00:11:19.303 }' 00:11:19.303 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.303 08:44:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.872 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:19.872 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:19.872 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:19.872 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:19.872 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:19.872 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:19.872 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:19.872 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.872 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.872 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:19.872 [2024-11-20 08:44:50.576408] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.872 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.872 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:19.872 "name": "Existed_Raid", 00:11:19.872 "aliases": [ 00:11:19.872 "9b4db003-f432-4739-849d-b1c3c374a462" 00:11:19.872 ], 00:11:19.872 "product_name": "Raid Volume", 00:11:19.872 "block_size": 512, 00:11:19.872 "num_blocks": 190464, 00:11:19.872 "uuid": "9b4db003-f432-4739-849d-b1c3c374a462", 00:11:19.872 "assigned_rate_limits": { 00:11:19.872 "rw_ios_per_sec": 0, 00:11:19.872 "rw_mbytes_per_sec": 0, 00:11:19.872 
"r_mbytes_per_sec": 0, 00:11:19.872 "w_mbytes_per_sec": 0 00:11:19.872 }, 00:11:19.872 "claimed": false, 00:11:19.872 "zoned": false, 00:11:19.872 "supported_io_types": { 00:11:19.872 "read": true, 00:11:19.872 "write": true, 00:11:19.872 "unmap": true, 00:11:19.872 "flush": true, 00:11:19.872 "reset": true, 00:11:19.872 "nvme_admin": false, 00:11:19.872 "nvme_io": false, 00:11:19.872 "nvme_io_md": false, 00:11:19.872 "write_zeroes": true, 00:11:19.872 "zcopy": false, 00:11:19.872 "get_zone_info": false, 00:11:19.872 "zone_management": false, 00:11:19.872 "zone_append": false, 00:11:19.872 "compare": false, 00:11:19.872 "compare_and_write": false, 00:11:19.872 "abort": false, 00:11:19.872 "seek_hole": false, 00:11:19.872 "seek_data": false, 00:11:19.872 "copy": false, 00:11:19.872 "nvme_iov_md": false 00:11:19.872 }, 00:11:19.872 "memory_domains": [ 00:11:19.872 { 00:11:19.872 "dma_device_id": "system", 00:11:19.872 "dma_device_type": 1 00:11:19.872 }, 00:11:19.872 { 00:11:19.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.872 "dma_device_type": 2 00:11:19.872 }, 00:11:19.872 { 00:11:19.872 "dma_device_id": "system", 00:11:19.872 "dma_device_type": 1 00:11:19.872 }, 00:11:19.872 { 00:11:19.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.872 "dma_device_type": 2 00:11:19.872 }, 00:11:19.872 { 00:11:19.872 "dma_device_id": "system", 00:11:19.872 "dma_device_type": 1 00:11:19.872 }, 00:11:19.872 { 00:11:19.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.872 "dma_device_type": 2 00:11:19.872 } 00:11:19.872 ], 00:11:19.872 "driver_specific": { 00:11:19.872 "raid": { 00:11:19.872 "uuid": "9b4db003-f432-4739-849d-b1c3c374a462", 00:11:19.872 "strip_size_kb": 64, 00:11:19.872 "state": "online", 00:11:19.872 "raid_level": "concat", 00:11:19.872 "superblock": true, 00:11:19.872 "num_base_bdevs": 3, 00:11:19.872 "num_base_bdevs_discovered": 3, 00:11:19.872 "num_base_bdevs_operational": 3, 00:11:19.872 "base_bdevs_list": [ 00:11:19.872 { 00:11:19.872 
"name": "BaseBdev1", 00:11:19.872 "uuid": "616484c9-5869-47eb-a771-9249e1669461", 00:11:19.872 "is_configured": true, 00:11:19.872 "data_offset": 2048, 00:11:19.872 "data_size": 63488 00:11:19.872 }, 00:11:19.872 { 00:11:19.872 "name": "BaseBdev2", 00:11:19.872 "uuid": "91b928a9-d10d-425a-b316-defdd271f81e", 00:11:19.872 "is_configured": true, 00:11:19.872 "data_offset": 2048, 00:11:19.872 "data_size": 63488 00:11:19.872 }, 00:11:19.872 { 00:11:19.872 "name": "BaseBdev3", 00:11:19.872 "uuid": "9b796358-b1a4-4ba1-9508-0b2519df2f7a", 00:11:19.872 "is_configured": true, 00:11:19.872 "data_offset": 2048, 00:11:19.872 "data_size": 63488 00:11:19.872 } 00:11:19.872 ] 00:11:19.872 } 00:11:19.872 } 00:11:19.872 }' 00:11:19.872 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:19.872 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:19.872 BaseBdev2 00:11:19.872 BaseBdev3' 00:11:19.873 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.873 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:19.873 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.873 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:19.873 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.873 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.873 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.873 08:44:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.873 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.873 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.873 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.133 [2024-11-20 08:44:50.892207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:20.133 [2024-11-20 08:44:50.892246] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:20.133 [2024-11-20 08:44:50.892317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.133 08:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.133 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.133 "name": "Existed_Raid", 00:11:20.133 "uuid": "9b4db003-f432-4739-849d-b1c3c374a462", 00:11:20.133 "strip_size_kb": 64, 00:11:20.133 "state": "offline", 00:11:20.133 "raid_level": "concat", 00:11:20.133 "superblock": true, 00:11:20.133 "num_base_bdevs": 3, 00:11:20.133 "num_base_bdevs_discovered": 2, 00:11:20.133 "num_base_bdevs_operational": 2, 00:11:20.133 "base_bdevs_list": [ 00:11:20.133 { 00:11:20.133 "name": null, 00:11:20.133 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:20.133 "is_configured": false, 00:11:20.133 "data_offset": 0, 00:11:20.133 "data_size": 63488 00:11:20.133 }, 00:11:20.133 { 00:11:20.133 "name": "BaseBdev2", 00:11:20.133 "uuid": "91b928a9-d10d-425a-b316-defdd271f81e", 00:11:20.133 "is_configured": true, 00:11:20.133 "data_offset": 2048, 00:11:20.133 "data_size": 63488 00:11:20.133 }, 00:11:20.133 { 00:11:20.133 "name": "BaseBdev3", 00:11:20.133 "uuid": "9b796358-b1a4-4ba1-9508-0b2519df2f7a", 00:11:20.133 "is_configured": true, 00:11:20.133 "data_offset": 2048, 00:11:20.133 "data_size": 63488 00:11:20.133 } 00:11:20.133 ] 00:11:20.133 }' 00:11:20.133 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.133 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.719 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:20.719 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:20.719 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.719 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.719 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.719 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:20.719 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.719 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:20.720 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:20.720 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:11:20.720 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.720 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.720 [2024-11-20 08:44:51.585695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:20.979 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.979 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:20.979 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:20.979 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.979 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.979 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.979 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:20.979 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.979 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:20.979 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:20.979 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:20.979 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.979 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.979 [2024-11-20 08:44:51.736164] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:20.979 [2024-11-20 08:44:51.736234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:20.979 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.979 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:20.979 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:20.979 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.979 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:20.979 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.979 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.979 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.238 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:21.238 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:21.238 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:21.238 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:21.238 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:21.238 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:21.238 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.238 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.238 BaseBdev2 00:11:21.238 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.238 
08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:21.238 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:21.238 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:21.238 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:21.238 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:21.238 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:21.238 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:21.238 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.238 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.238 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.238 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:21.238 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.238 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.238 [ 00:11:21.238 { 00:11:21.238 "name": "BaseBdev2", 00:11:21.238 "aliases": [ 00:11:21.238 "c217e37d-734d-4f25-b6c2-9bd84594515d" 00:11:21.238 ], 00:11:21.238 "product_name": "Malloc disk", 00:11:21.238 "block_size": 512, 00:11:21.238 "num_blocks": 65536, 00:11:21.238 "uuid": "c217e37d-734d-4f25-b6c2-9bd84594515d", 00:11:21.238 "assigned_rate_limits": { 00:11:21.238 "rw_ios_per_sec": 0, 00:11:21.238 "rw_mbytes_per_sec": 0, 00:11:21.238 "r_mbytes_per_sec": 0, 00:11:21.238 "w_mbytes_per_sec": 0 
00:11:21.238 }, 00:11:21.238 "claimed": false, 00:11:21.238 "zoned": false, 00:11:21.238 "supported_io_types": { 00:11:21.238 "read": true, 00:11:21.238 "write": true, 00:11:21.238 "unmap": true, 00:11:21.238 "flush": true, 00:11:21.238 "reset": true, 00:11:21.238 "nvme_admin": false, 00:11:21.238 "nvme_io": false, 00:11:21.238 "nvme_io_md": false, 00:11:21.238 "write_zeroes": true, 00:11:21.238 "zcopy": true, 00:11:21.238 "get_zone_info": false, 00:11:21.238 "zone_management": false, 00:11:21.238 "zone_append": false, 00:11:21.238 "compare": false, 00:11:21.238 "compare_and_write": false, 00:11:21.238 "abort": true, 00:11:21.238 "seek_hole": false, 00:11:21.238 "seek_data": false, 00:11:21.238 "copy": true, 00:11:21.238 "nvme_iov_md": false 00:11:21.238 }, 00:11:21.238 "memory_domains": [ 00:11:21.238 { 00:11:21.238 "dma_device_id": "system", 00:11:21.238 "dma_device_type": 1 00:11:21.238 }, 00:11:21.238 { 00:11:21.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.238 "dma_device_type": 2 00:11:21.238 } 00:11:21.238 ], 00:11:21.238 "driver_specific": {} 00:11:21.238 } 00:11:21.238 ] 00:11:21.239 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.239 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:21.239 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:21.239 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:21.239 08:44:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:21.239 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.239 08:44:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.239 BaseBdev3 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.239 [ 00:11:21.239 { 00:11:21.239 "name": "BaseBdev3", 00:11:21.239 "aliases": [ 00:11:21.239 "c0d2f5e4-e984-4d37-89e7-2f4c110f9c65" 00:11:21.239 ], 00:11:21.239 "product_name": "Malloc disk", 00:11:21.239 "block_size": 512, 00:11:21.239 "num_blocks": 65536, 00:11:21.239 "uuid": "c0d2f5e4-e984-4d37-89e7-2f4c110f9c65", 00:11:21.239 "assigned_rate_limits": { 00:11:21.239 "rw_ios_per_sec": 0, 00:11:21.239 "rw_mbytes_per_sec": 0, 
00:11:21.239 "r_mbytes_per_sec": 0, 00:11:21.239 "w_mbytes_per_sec": 0 00:11:21.239 }, 00:11:21.239 "claimed": false, 00:11:21.239 "zoned": false, 00:11:21.239 "supported_io_types": { 00:11:21.239 "read": true, 00:11:21.239 "write": true, 00:11:21.239 "unmap": true, 00:11:21.239 "flush": true, 00:11:21.239 "reset": true, 00:11:21.239 "nvme_admin": false, 00:11:21.239 "nvme_io": false, 00:11:21.239 "nvme_io_md": false, 00:11:21.239 "write_zeroes": true, 00:11:21.239 "zcopy": true, 00:11:21.239 "get_zone_info": false, 00:11:21.239 "zone_management": false, 00:11:21.239 "zone_append": false, 00:11:21.239 "compare": false, 00:11:21.239 "compare_and_write": false, 00:11:21.239 "abort": true, 00:11:21.239 "seek_hole": false, 00:11:21.239 "seek_data": false, 00:11:21.239 "copy": true, 00:11:21.239 "nvme_iov_md": false 00:11:21.239 }, 00:11:21.239 "memory_domains": [ 00:11:21.239 { 00:11:21.239 "dma_device_id": "system", 00:11:21.239 "dma_device_type": 1 00:11:21.239 }, 00:11:21.239 { 00:11:21.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.239 "dma_device_type": 2 00:11:21.239 } 00:11:21.239 ], 00:11:21.239 "driver_specific": {} 00:11:21.239 } 00:11:21.239 ] 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:21.239 [2024-11-20 08:44:52.041928] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:21.239 [2024-11-20 08:44:52.041990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:21.239 [2024-11-20 08:44:52.042029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:21.239 [2024-11-20 08:44:52.044459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.239 08:44:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.239 "name": "Existed_Raid", 00:11:21.239 "uuid": "54598712-d2b4-421c-b49d-66a974c624c5", 00:11:21.239 "strip_size_kb": 64, 00:11:21.239 "state": "configuring", 00:11:21.239 "raid_level": "concat", 00:11:21.239 "superblock": true, 00:11:21.239 "num_base_bdevs": 3, 00:11:21.239 "num_base_bdevs_discovered": 2, 00:11:21.239 "num_base_bdevs_operational": 3, 00:11:21.239 "base_bdevs_list": [ 00:11:21.239 { 00:11:21.239 "name": "BaseBdev1", 00:11:21.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.239 "is_configured": false, 00:11:21.239 "data_offset": 0, 00:11:21.239 "data_size": 0 00:11:21.239 }, 00:11:21.239 { 00:11:21.239 "name": "BaseBdev2", 00:11:21.239 "uuid": "c217e37d-734d-4f25-b6c2-9bd84594515d", 00:11:21.239 "is_configured": true, 00:11:21.239 "data_offset": 2048, 00:11:21.239 "data_size": 63488 00:11:21.239 }, 00:11:21.239 { 00:11:21.239 "name": "BaseBdev3", 00:11:21.239 "uuid": "c0d2f5e4-e984-4d37-89e7-2f4c110f9c65", 00:11:21.239 "is_configured": true, 00:11:21.239 "data_offset": 2048, 00:11:21.239 "data_size": 63488 00:11:21.239 } 00:11:21.239 ] 00:11:21.239 }' 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.239 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.807 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:11:21.807 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.807 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.807 [2024-11-20 08:44:52.562013] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:21.807 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.807 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:21.807 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.807 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.807 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.807 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.807 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:21.807 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.807 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.807 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.807 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.807 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.807 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.807 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:21.807 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.807 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.807 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.807 "name": "Existed_Raid", 00:11:21.807 "uuid": "54598712-d2b4-421c-b49d-66a974c624c5", 00:11:21.807 "strip_size_kb": 64, 00:11:21.807 "state": "configuring", 00:11:21.807 "raid_level": "concat", 00:11:21.807 "superblock": true, 00:11:21.807 "num_base_bdevs": 3, 00:11:21.807 "num_base_bdevs_discovered": 1, 00:11:21.807 "num_base_bdevs_operational": 3, 00:11:21.807 "base_bdevs_list": [ 00:11:21.807 { 00:11:21.807 "name": "BaseBdev1", 00:11:21.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.807 "is_configured": false, 00:11:21.807 "data_offset": 0, 00:11:21.807 "data_size": 0 00:11:21.807 }, 00:11:21.807 { 00:11:21.807 "name": null, 00:11:21.807 "uuid": "c217e37d-734d-4f25-b6c2-9bd84594515d", 00:11:21.807 "is_configured": false, 00:11:21.807 "data_offset": 0, 00:11:21.807 "data_size": 63488 00:11:21.807 }, 00:11:21.807 { 00:11:21.807 "name": "BaseBdev3", 00:11:21.807 "uuid": "c0d2f5e4-e984-4d37-89e7-2f4c110f9c65", 00:11:21.807 "is_configured": true, 00:11:21.807 "data_offset": 2048, 00:11:21.807 "data_size": 63488 00:11:21.807 } 00:11:21.807 ] 00:11:21.807 }' 00:11:21.807 08:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.807 08:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.375 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:22.375 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.375 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:22.375 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.375 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.375 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:22.375 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:22.375 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.375 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.375 [2024-11-20 08:44:53.171542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.375 BaseBdev1 00:11:22.375 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.375 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:22.375 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:22.375 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.375 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:22.375 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.375 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.375 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.375 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.375 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.375 08:44:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.375 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:22.375 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.375 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.375 [ 00:11:22.375 { 00:11:22.375 "name": "BaseBdev1", 00:11:22.375 "aliases": [ 00:11:22.375 "59fbf3c2-7a00-42ad-befc-0545c95e9d26" 00:11:22.375 ], 00:11:22.375 "product_name": "Malloc disk", 00:11:22.375 "block_size": 512, 00:11:22.375 "num_blocks": 65536, 00:11:22.375 "uuid": "59fbf3c2-7a00-42ad-befc-0545c95e9d26", 00:11:22.375 "assigned_rate_limits": { 00:11:22.375 "rw_ios_per_sec": 0, 00:11:22.375 "rw_mbytes_per_sec": 0, 00:11:22.375 "r_mbytes_per_sec": 0, 00:11:22.375 "w_mbytes_per_sec": 0 00:11:22.375 }, 00:11:22.375 "claimed": true, 00:11:22.375 "claim_type": "exclusive_write", 00:11:22.375 "zoned": false, 00:11:22.375 "supported_io_types": { 00:11:22.375 "read": true, 00:11:22.375 "write": true, 00:11:22.375 "unmap": true, 00:11:22.375 "flush": true, 00:11:22.375 "reset": true, 00:11:22.375 "nvme_admin": false, 00:11:22.375 "nvme_io": false, 00:11:22.375 "nvme_io_md": false, 00:11:22.375 "write_zeroes": true, 00:11:22.375 "zcopy": true, 00:11:22.375 "get_zone_info": false, 00:11:22.375 "zone_management": false, 00:11:22.375 "zone_append": false, 00:11:22.375 "compare": false, 00:11:22.375 "compare_and_write": false, 00:11:22.375 "abort": true, 00:11:22.375 "seek_hole": false, 00:11:22.375 "seek_data": false, 00:11:22.375 "copy": true, 00:11:22.375 "nvme_iov_md": false 00:11:22.375 }, 00:11:22.375 "memory_domains": [ 00:11:22.375 { 00:11:22.375 "dma_device_id": "system", 00:11:22.375 "dma_device_type": 1 00:11:22.375 }, 00:11:22.375 { 00:11:22.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.376 
"dma_device_type": 2 00:11:22.376 } 00:11:22.376 ], 00:11:22.376 "driver_specific": {} 00:11:22.376 } 00:11:22.376 ] 00:11:22.376 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.376 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:22.376 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:22.376 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.376 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.376 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.376 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.376 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:22.376 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.376 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.376 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.376 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.376 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.376 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.376 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.376 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:11:22.376 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.376 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.376 "name": "Existed_Raid", 00:11:22.376 "uuid": "54598712-d2b4-421c-b49d-66a974c624c5", 00:11:22.376 "strip_size_kb": 64, 00:11:22.376 "state": "configuring", 00:11:22.376 "raid_level": "concat", 00:11:22.376 "superblock": true, 00:11:22.376 "num_base_bdevs": 3, 00:11:22.376 "num_base_bdevs_discovered": 2, 00:11:22.376 "num_base_bdevs_operational": 3, 00:11:22.376 "base_bdevs_list": [ 00:11:22.376 { 00:11:22.376 "name": "BaseBdev1", 00:11:22.376 "uuid": "59fbf3c2-7a00-42ad-befc-0545c95e9d26", 00:11:22.376 "is_configured": true, 00:11:22.376 "data_offset": 2048, 00:11:22.376 "data_size": 63488 00:11:22.376 }, 00:11:22.376 { 00:11:22.376 "name": null, 00:11:22.376 "uuid": "c217e37d-734d-4f25-b6c2-9bd84594515d", 00:11:22.376 "is_configured": false, 00:11:22.376 "data_offset": 0, 00:11:22.376 "data_size": 63488 00:11:22.376 }, 00:11:22.376 { 00:11:22.376 "name": "BaseBdev3", 00:11:22.376 "uuid": "c0d2f5e4-e984-4d37-89e7-2f4c110f9c65", 00:11:22.376 "is_configured": true, 00:11:22.376 "data_offset": 2048, 00:11:22.376 "data_size": 63488 00:11:22.376 } 00:11:22.376 ] 00:11:22.376 }' 00:11:22.376 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.376 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.944 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.944 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:22.944 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.944 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:22.944 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.944 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:22.944 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:22.944 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.944 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.944 [2024-11-20 08:44:53.747772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:22.944 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.944 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:22.944 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.944 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.944 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.944 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.944 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:22.945 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.945 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.945 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.945 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.945 
08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.945 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.945 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.945 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.945 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.945 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.945 "name": "Existed_Raid", 00:11:22.945 "uuid": "54598712-d2b4-421c-b49d-66a974c624c5", 00:11:22.945 "strip_size_kb": 64, 00:11:22.945 "state": "configuring", 00:11:22.945 "raid_level": "concat", 00:11:22.945 "superblock": true, 00:11:22.945 "num_base_bdevs": 3, 00:11:22.945 "num_base_bdevs_discovered": 1, 00:11:22.945 "num_base_bdevs_operational": 3, 00:11:22.945 "base_bdevs_list": [ 00:11:22.945 { 00:11:22.945 "name": "BaseBdev1", 00:11:22.945 "uuid": "59fbf3c2-7a00-42ad-befc-0545c95e9d26", 00:11:22.945 "is_configured": true, 00:11:22.945 "data_offset": 2048, 00:11:22.945 "data_size": 63488 00:11:22.945 }, 00:11:22.945 { 00:11:22.945 "name": null, 00:11:22.945 "uuid": "c217e37d-734d-4f25-b6c2-9bd84594515d", 00:11:22.945 "is_configured": false, 00:11:22.945 "data_offset": 0, 00:11:22.945 "data_size": 63488 00:11:22.945 }, 00:11:22.945 { 00:11:22.945 "name": null, 00:11:22.945 "uuid": "c0d2f5e4-e984-4d37-89e7-2f4c110f9c65", 00:11:22.945 "is_configured": false, 00:11:22.945 "data_offset": 0, 00:11:22.945 "data_size": 63488 00:11:22.945 } 00:11:22.945 ] 00:11:22.945 }' 00:11:22.945 08:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.945 08:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.512 
08:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.512 08:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.512 08:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:23.512 08:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.512 08:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.512 08:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:23.512 08:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:23.512 08:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.512 08:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.512 [2024-11-20 08:44:54.347972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:23.512 08:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.512 08:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:23.512 08:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.512 08:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.512 08:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.512 08:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.512 08:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:11:23.513 08:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.513 08:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.513 08:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.513 08:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.513 08:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.513 08:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.513 08:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.513 08:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.513 08:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.513 08:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.513 "name": "Existed_Raid", 00:11:23.513 "uuid": "54598712-d2b4-421c-b49d-66a974c624c5", 00:11:23.513 "strip_size_kb": 64, 00:11:23.513 "state": "configuring", 00:11:23.513 "raid_level": "concat", 00:11:23.513 "superblock": true, 00:11:23.513 "num_base_bdevs": 3, 00:11:23.513 "num_base_bdevs_discovered": 2, 00:11:23.513 "num_base_bdevs_operational": 3, 00:11:23.513 "base_bdevs_list": [ 00:11:23.513 { 00:11:23.513 "name": "BaseBdev1", 00:11:23.513 "uuid": "59fbf3c2-7a00-42ad-befc-0545c95e9d26", 00:11:23.513 "is_configured": true, 00:11:23.513 "data_offset": 2048, 00:11:23.513 "data_size": 63488 00:11:23.513 }, 00:11:23.513 { 00:11:23.513 "name": null, 00:11:23.513 "uuid": "c217e37d-734d-4f25-b6c2-9bd84594515d", 00:11:23.513 "is_configured": false, 00:11:23.513 "data_offset": 0, 00:11:23.513 "data_size": 
63488 00:11:23.513 }, 00:11:23.513 { 00:11:23.513 "name": "BaseBdev3", 00:11:23.513 "uuid": "c0d2f5e4-e984-4d37-89e7-2f4c110f9c65", 00:11:23.513 "is_configured": true, 00:11:23.513 "data_offset": 2048, 00:11:23.513 "data_size": 63488 00:11:23.513 } 00:11:23.513 ] 00:11:23.513 }' 00:11:23.513 08:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.513 08:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.080 08:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.080 08:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:24.080 08:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.080 08:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.080 08:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.080 08:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:24.080 08:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:24.080 08:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.080 08:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.080 [2024-11-20 08:44:54.944138] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:24.339 08:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.339 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:24.339 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:24.339 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.339 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.339 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.339 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.339 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.339 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.339 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.339 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.339 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.339 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.339 08:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.339 08:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.339 08:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.339 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.339 "name": "Existed_Raid", 00:11:24.339 "uuid": "54598712-d2b4-421c-b49d-66a974c624c5", 00:11:24.339 "strip_size_kb": 64, 00:11:24.339 "state": "configuring", 00:11:24.339 "raid_level": "concat", 00:11:24.339 "superblock": true, 00:11:24.339 "num_base_bdevs": 3, 00:11:24.339 "num_base_bdevs_discovered": 1, 00:11:24.339 "num_base_bdevs_operational": 
3, 00:11:24.339 "base_bdevs_list": [ 00:11:24.339 { 00:11:24.339 "name": null, 00:11:24.339 "uuid": "59fbf3c2-7a00-42ad-befc-0545c95e9d26", 00:11:24.339 "is_configured": false, 00:11:24.339 "data_offset": 0, 00:11:24.339 "data_size": 63488 00:11:24.339 }, 00:11:24.339 { 00:11:24.339 "name": null, 00:11:24.339 "uuid": "c217e37d-734d-4f25-b6c2-9bd84594515d", 00:11:24.339 "is_configured": false, 00:11:24.339 "data_offset": 0, 00:11:24.339 "data_size": 63488 00:11:24.339 }, 00:11:24.339 { 00:11:24.339 "name": "BaseBdev3", 00:11:24.339 "uuid": "c0d2f5e4-e984-4d37-89e7-2f4c110f9c65", 00:11:24.339 "is_configured": true, 00:11:24.339 "data_offset": 2048, 00:11:24.339 "data_size": 63488 00:11:24.339 } 00:11:24.339 ] 00:11:24.339 }' 00:11:24.339 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.339 08:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:24.907 [2024-11-20 08:44:55.569014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.907 "name": "Existed_Raid", 00:11:24.907 "uuid": "54598712-d2b4-421c-b49d-66a974c624c5", 00:11:24.907 "strip_size_kb": 64, 00:11:24.907 "state": "configuring", 00:11:24.907 "raid_level": "concat", 00:11:24.907 "superblock": true, 00:11:24.907 "num_base_bdevs": 3, 00:11:24.907 "num_base_bdevs_discovered": 2, 00:11:24.907 "num_base_bdevs_operational": 3, 00:11:24.907 "base_bdevs_list": [ 00:11:24.907 { 00:11:24.907 "name": null, 00:11:24.907 "uuid": "59fbf3c2-7a00-42ad-befc-0545c95e9d26", 00:11:24.907 "is_configured": false, 00:11:24.907 "data_offset": 0, 00:11:24.907 "data_size": 63488 00:11:24.907 }, 00:11:24.907 { 00:11:24.907 "name": "BaseBdev2", 00:11:24.907 "uuid": "c217e37d-734d-4f25-b6c2-9bd84594515d", 00:11:24.907 "is_configured": true, 00:11:24.907 "data_offset": 2048, 00:11:24.907 "data_size": 63488 00:11:24.907 }, 00:11:24.907 { 00:11:24.907 "name": "BaseBdev3", 00:11:24.907 "uuid": "c0d2f5e4-e984-4d37-89e7-2f4c110f9c65", 00:11:24.907 "is_configured": true, 00:11:24.907 "data_offset": 2048, 00:11:24.907 "data_size": 63488 00:11:24.907 } 00:11:24.907 ] 00:11:24.907 }' 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.907 08:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.166 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.166 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.166 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.166 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 59fbf3c2-7a00-42ad-befc-0545c95e9d26 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.478 [2024-11-20 08:44:56.214671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:25.478 [2024-11-20 08:44:56.214952] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:25.478 [2024-11-20 08:44:56.214976] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:25.478 [2024-11-20 08:44:56.215313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:25.478 [2024-11-20 08:44:56.215505] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:25.478 [2024-11-20 08:44:56.215522] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:25.478 NewBaseBdev 00:11:25.478 [2024-11-20 08:44:56.215709] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.478 [ 00:11:25.478 { 00:11:25.478 "name": "NewBaseBdev", 00:11:25.478 "aliases": [ 00:11:25.478 "59fbf3c2-7a00-42ad-befc-0545c95e9d26" 00:11:25.478 ], 00:11:25.478 "product_name": "Malloc disk", 00:11:25.478 "block_size": 512, 00:11:25.478 "num_blocks": 65536, 00:11:25.478 "uuid": 
"59fbf3c2-7a00-42ad-befc-0545c95e9d26", 00:11:25.478 "assigned_rate_limits": { 00:11:25.478 "rw_ios_per_sec": 0, 00:11:25.478 "rw_mbytes_per_sec": 0, 00:11:25.478 "r_mbytes_per_sec": 0, 00:11:25.478 "w_mbytes_per_sec": 0 00:11:25.478 }, 00:11:25.478 "claimed": true, 00:11:25.478 "claim_type": "exclusive_write", 00:11:25.478 "zoned": false, 00:11:25.478 "supported_io_types": { 00:11:25.478 "read": true, 00:11:25.478 "write": true, 00:11:25.478 "unmap": true, 00:11:25.478 "flush": true, 00:11:25.478 "reset": true, 00:11:25.478 "nvme_admin": false, 00:11:25.478 "nvme_io": false, 00:11:25.478 "nvme_io_md": false, 00:11:25.478 "write_zeroes": true, 00:11:25.478 "zcopy": true, 00:11:25.478 "get_zone_info": false, 00:11:25.478 "zone_management": false, 00:11:25.478 "zone_append": false, 00:11:25.478 "compare": false, 00:11:25.478 "compare_and_write": false, 00:11:25.478 "abort": true, 00:11:25.478 "seek_hole": false, 00:11:25.478 "seek_data": false, 00:11:25.478 "copy": true, 00:11:25.478 "nvme_iov_md": false 00:11:25.478 }, 00:11:25.478 "memory_domains": [ 00:11:25.478 { 00:11:25.478 "dma_device_id": "system", 00:11:25.478 "dma_device_type": 1 00:11:25.478 }, 00:11:25.478 { 00:11:25.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.478 "dma_device_type": 2 00:11:25.478 } 00:11:25.478 ], 00:11:25.478 "driver_specific": {} 00:11:25.478 } 00:11:25.478 ] 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.478 08:44:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.478 "name": "Existed_Raid", 00:11:25.478 "uuid": "54598712-d2b4-421c-b49d-66a974c624c5", 00:11:25.478 "strip_size_kb": 64, 00:11:25.478 "state": "online", 00:11:25.478 "raid_level": "concat", 00:11:25.478 "superblock": true, 00:11:25.478 "num_base_bdevs": 3, 00:11:25.478 "num_base_bdevs_discovered": 3, 00:11:25.478 "num_base_bdevs_operational": 3, 00:11:25.478 "base_bdevs_list": [ 00:11:25.478 { 00:11:25.478 "name": "NewBaseBdev", 00:11:25.478 "uuid": "59fbf3c2-7a00-42ad-befc-0545c95e9d26", 00:11:25.478 "is_configured": 
true, 00:11:25.478 "data_offset": 2048, 00:11:25.478 "data_size": 63488 00:11:25.478 }, 00:11:25.478 { 00:11:25.478 "name": "BaseBdev2", 00:11:25.478 "uuid": "c217e37d-734d-4f25-b6c2-9bd84594515d", 00:11:25.478 "is_configured": true, 00:11:25.478 "data_offset": 2048, 00:11:25.478 "data_size": 63488 00:11:25.478 }, 00:11:25.478 { 00:11:25.478 "name": "BaseBdev3", 00:11:25.478 "uuid": "c0d2f5e4-e984-4d37-89e7-2f4c110f9c65", 00:11:25.478 "is_configured": true, 00:11:25.478 "data_offset": 2048, 00:11:25.478 "data_size": 63488 00:11:25.478 } 00:11:25.478 ] 00:11:25.478 }' 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.478 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.047 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:26.047 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:26.047 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:26.047 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:26.047 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:26.047 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:26.047 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:26.047 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:26.047 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.047 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.047 [2024-11-20 08:44:56.779258] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.047 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.048 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:26.048 "name": "Existed_Raid", 00:11:26.048 "aliases": [ 00:11:26.048 "54598712-d2b4-421c-b49d-66a974c624c5" 00:11:26.048 ], 00:11:26.048 "product_name": "Raid Volume", 00:11:26.048 "block_size": 512, 00:11:26.048 "num_blocks": 190464, 00:11:26.048 "uuid": "54598712-d2b4-421c-b49d-66a974c624c5", 00:11:26.048 "assigned_rate_limits": { 00:11:26.048 "rw_ios_per_sec": 0, 00:11:26.048 "rw_mbytes_per_sec": 0, 00:11:26.048 "r_mbytes_per_sec": 0, 00:11:26.048 "w_mbytes_per_sec": 0 00:11:26.048 }, 00:11:26.048 "claimed": false, 00:11:26.048 "zoned": false, 00:11:26.048 "supported_io_types": { 00:11:26.048 "read": true, 00:11:26.048 "write": true, 00:11:26.048 "unmap": true, 00:11:26.048 "flush": true, 00:11:26.048 "reset": true, 00:11:26.048 "nvme_admin": false, 00:11:26.048 "nvme_io": false, 00:11:26.048 "nvme_io_md": false, 00:11:26.048 "write_zeroes": true, 00:11:26.048 "zcopy": false, 00:11:26.048 "get_zone_info": false, 00:11:26.048 "zone_management": false, 00:11:26.048 "zone_append": false, 00:11:26.048 "compare": false, 00:11:26.048 "compare_and_write": false, 00:11:26.048 "abort": false, 00:11:26.048 "seek_hole": false, 00:11:26.048 "seek_data": false, 00:11:26.048 "copy": false, 00:11:26.048 "nvme_iov_md": false 00:11:26.048 }, 00:11:26.048 "memory_domains": [ 00:11:26.048 { 00:11:26.048 "dma_device_id": "system", 00:11:26.048 "dma_device_type": 1 00:11:26.048 }, 00:11:26.048 { 00:11:26.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.048 "dma_device_type": 2 00:11:26.048 }, 00:11:26.048 { 00:11:26.048 "dma_device_id": "system", 00:11:26.048 "dma_device_type": 1 00:11:26.048 }, 00:11:26.048 { 00:11:26.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.048 
"dma_device_type": 2 00:11:26.048 }, 00:11:26.048 { 00:11:26.048 "dma_device_id": "system", 00:11:26.048 "dma_device_type": 1 00:11:26.048 }, 00:11:26.048 { 00:11:26.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.048 "dma_device_type": 2 00:11:26.048 } 00:11:26.048 ], 00:11:26.048 "driver_specific": { 00:11:26.048 "raid": { 00:11:26.048 "uuid": "54598712-d2b4-421c-b49d-66a974c624c5", 00:11:26.048 "strip_size_kb": 64, 00:11:26.048 "state": "online", 00:11:26.048 "raid_level": "concat", 00:11:26.048 "superblock": true, 00:11:26.048 "num_base_bdevs": 3, 00:11:26.048 "num_base_bdevs_discovered": 3, 00:11:26.048 "num_base_bdevs_operational": 3, 00:11:26.048 "base_bdevs_list": [ 00:11:26.048 { 00:11:26.048 "name": "NewBaseBdev", 00:11:26.048 "uuid": "59fbf3c2-7a00-42ad-befc-0545c95e9d26", 00:11:26.048 "is_configured": true, 00:11:26.048 "data_offset": 2048, 00:11:26.048 "data_size": 63488 00:11:26.048 }, 00:11:26.048 { 00:11:26.048 "name": "BaseBdev2", 00:11:26.048 "uuid": "c217e37d-734d-4f25-b6c2-9bd84594515d", 00:11:26.048 "is_configured": true, 00:11:26.048 "data_offset": 2048, 00:11:26.048 "data_size": 63488 00:11:26.048 }, 00:11:26.048 { 00:11:26.048 "name": "BaseBdev3", 00:11:26.048 "uuid": "c0d2f5e4-e984-4d37-89e7-2f4c110f9c65", 00:11:26.048 "is_configured": true, 00:11:26.048 "data_offset": 2048, 00:11:26.048 "data_size": 63488 00:11:26.048 } 00:11:26.048 ] 00:11:26.048 } 00:11:26.048 } 00:11:26.048 }' 00:11:26.048 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:26.048 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:26.048 BaseBdev2 00:11:26.048 BaseBdev3' 00:11:26.048 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.048 08:44:56 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:26.048 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.048 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:26.048 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.048 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.048 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.048 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.307 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.307 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.307 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.307 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:26.307 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.307 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.307 08:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.307 08:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.307 08:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.307 08:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.307 
08:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.307 08:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:26.307 08:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.307 08:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.307 08:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.307 08:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.307 08:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.307 08:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.307 08:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:26.307 08:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.307 08:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.307 [2024-11-20 08:44:57.066980] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:26.307 [2024-11-20 08:44:57.067020] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.307 [2024-11-20 08:44:57.067133] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.307 [2024-11-20 08:44:57.067230] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:26.307 [2024-11-20 08:44:57.067260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:26.307 08:44:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.307 08:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66252 00:11:26.307 08:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66252 ']' 00:11:26.307 08:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66252 00:11:26.307 08:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:26.307 08:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:26.307 08:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66252 00:11:26.307 08:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:26.307 08:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:26.307 killing process with pid 66252 00:11:26.307 08:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66252' 00:11:26.307 08:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66252 00:11:26.307 [2024-11-20 08:44:57.100077] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:26.307 08:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66252 00:11:26.567 [2024-11-20 08:44:57.372864] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:27.502 08:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:27.502 00:11:27.502 real 0m11.761s 00:11:27.502 user 0m19.589s 00:11:27.502 sys 0m1.556s 00:11:27.502 08:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.502 ************************************ 00:11:27.502 
END TEST raid_state_function_test_sb 00:11:27.502 ************************************ 00:11:27.502 08:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.761 08:44:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:11:27.761 08:44:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:27.761 08:44:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.761 08:44:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:27.761 ************************************ 00:11:27.761 START TEST raid_superblock_test 00:11:27.761 ************************************ 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66883 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66883 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66883 ']' 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.761 08:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.761 [2024-11-20 08:44:58.545276] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:11:27.761 [2024-11-20 08:44:58.545440] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66883 ]
00:11:28.021 [2024-11-20 08:44:58.720368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:28.021 [2024-11-20 08:44:58.848440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:28.280 [2024-11-20 08:44:59.051625] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:28.280 [2024-11-20 08:44:59.051688] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.848 malloc1
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.848 [2024-11-20 08:44:59.514095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:28.848 [2024-11-20 08:44:59.514185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:28.848 [2024-11-20 08:44:59.514222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:11:28.848 [2024-11-20 08:44:59.514239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:28.848 [2024-11-20 08:44:59.517041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:28.848 [2024-11-20 08:44:59.517092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:11:28.848 pt1
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.848 malloc2
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.848 [2024-11-20 08:44:59.566072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:28.848 [2024-11-20 08:44:59.566171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:28.848 [2024-11-20 08:44:59.566205] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:11:28.848 [2024-11-20 08:44:59.566221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:28.848 [2024-11-20 08:44:59.569073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:28.848 [2024-11-20 08:44:59.569135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:28.848 pt2
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.848 malloc3
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.848 [2024-11-20 08:44:59.624462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:11:28.848 [2024-11-20 08:44:59.624535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:28.848 [2024-11-20 08:44:59.624568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:11:28.848 [2024-11-20 08:44:59.624584] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:28.848 [2024-11-20 08:44:59.627427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:28.848 [2024-11-20 08:44:59.627471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:11:28.848 pt3
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.848 [2024-11-20 08:44:59.632510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:11:28.848 [2024-11-20 08:44:59.634913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:28.848 [2024-11-20 08:44:59.635022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:11:28.848 [2024-11-20 08:44:59.635254] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:11:28.848 [2024-11-20 08:44:59.635295] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:11:28.848 [2024-11-20 08:44:59.635650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:11:28.848 [2024-11-20 08:44:59.635862] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:11:28.848 [2024-11-20 08:44:59.635880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:11:28.848 [2024-11-20 08:44:59.636063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:28.848 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:28.849 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:28.849 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:28.849 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:28.849 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.849 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.849 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:28.849 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.849 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:28.849 "name": "raid_bdev1",
00:11:28.849 "uuid": "985154e0-0060-44ba-ae3b-22cab62bcc92",
00:11:28.849 "strip_size_kb": 64,
00:11:28.849 "state": "online",
00:11:28.849 "raid_level": "concat",
00:11:28.849 "superblock": true,
00:11:28.849 "num_base_bdevs": 3,
00:11:28.849 "num_base_bdevs_discovered": 3,
00:11:28.849 "num_base_bdevs_operational": 3,
00:11:28.849 "base_bdevs_list": [
00:11:28.849 {
00:11:28.849 "name": "pt1",
00:11:28.849 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:28.849 "is_configured": true,
00:11:28.849 "data_offset": 2048,
00:11:28.849 "data_size": 63488
00:11:28.849 },
00:11:28.849 {
00:11:28.849 "name": "pt2",
00:11:28.849 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:28.849 "is_configured": true,
00:11:28.849 "data_offset": 2048,
00:11:28.849 "data_size": 63488
00:11:28.849 },
00:11:28.849 {
00:11:28.849 "name": "pt3",
00:11:28.849 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:28.849 "is_configured": true,
00:11:28.849 "data_offset": 2048,
00:11:28.849 "data_size": 63488
00:11:28.849 }
00:11:28.849 ]
00:11:28.849 }'
00:11:28.849 08:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:28.849 08:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.417 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:11:29.417 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:11:29.417 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:29.417 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:29.417 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:29.417 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:29.417 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:29.417 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:29.417 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.417 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.417 [2024-11-20 08:45:00.237021] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:29.417 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.417 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:29.417 "name": "raid_bdev1",
00:11:29.417 "aliases": [
00:11:29.417 "985154e0-0060-44ba-ae3b-22cab62bcc92"
00:11:29.417 ],
00:11:29.417 "product_name": "Raid Volume",
00:11:29.417 "block_size": 512,
00:11:29.417 "num_blocks": 190464,
00:11:29.417 "uuid": "985154e0-0060-44ba-ae3b-22cab62bcc92",
00:11:29.417 "assigned_rate_limits": {
00:11:29.417 "rw_ios_per_sec": 0,
00:11:29.417 "rw_mbytes_per_sec": 0,
00:11:29.417 "r_mbytes_per_sec": 0,
00:11:29.417 "w_mbytes_per_sec": 0
00:11:29.417 },
00:11:29.417 "claimed": false,
00:11:29.417 "zoned": false,
00:11:29.417 "supported_io_types": {
00:11:29.417 "read": true,
00:11:29.417 "write": true,
00:11:29.417 "unmap": true,
00:11:29.417 "flush": true,
00:11:29.417 "reset": true,
00:11:29.417 "nvme_admin": false,
00:11:29.417 "nvme_io": false,
00:11:29.417 "nvme_io_md": false,
00:11:29.417 "write_zeroes": true,
00:11:29.417 "zcopy": false,
00:11:29.417 "get_zone_info": false,
00:11:29.417 "zone_management": false,
00:11:29.417 "zone_append": false,
00:11:29.417 "compare": false,
00:11:29.417 "compare_and_write": false,
00:11:29.417 "abort": false,
00:11:29.417 "seek_hole": false,
00:11:29.417 "seek_data": false,
00:11:29.417 "copy": false,
00:11:29.417 "nvme_iov_md": false
00:11:29.417 },
00:11:29.417 "memory_domains": [
00:11:29.417 {
00:11:29.417 "dma_device_id": "system",
00:11:29.417 "dma_device_type": 1
00:11:29.417 },
00:11:29.417 {
00:11:29.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:29.417 "dma_device_type": 2
00:11:29.417 },
00:11:29.417 {
00:11:29.417 "dma_device_id": "system",
00:11:29.417 "dma_device_type": 1
00:11:29.417 },
00:11:29.417 {
00:11:29.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:29.417 "dma_device_type": 2
00:11:29.417 },
00:11:29.417 {
00:11:29.417 "dma_device_id": "system",
00:11:29.417 "dma_device_type": 1
00:11:29.417 },
00:11:29.417 {
00:11:29.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:29.417 "dma_device_type": 2
00:11:29.417 }
00:11:29.417 ],
00:11:29.417 "driver_specific": {
00:11:29.417 "raid": {
00:11:29.417 "uuid": "985154e0-0060-44ba-ae3b-22cab62bcc92",
00:11:29.417 "strip_size_kb": 64,
00:11:29.417 "state": "online",
00:11:29.417 "raid_level": "concat",
00:11:29.417 "superblock": true,
00:11:29.417 "num_base_bdevs": 3,
00:11:29.417 "num_base_bdevs_discovered": 3,
00:11:29.417 "num_base_bdevs_operational": 3,
00:11:29.417 "base_bdevs_list": [
00:11:29.417 {
00:11:29.417 "name": "pt1",
00:11:29.417 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:29.417 "is_configured": true,
00:11:29.417 "data_offset": 2048,
00:11:29.417 "data_size": 63488
00:11:29.417 },
00:11:29.417 {
00:11:29.417 "name": "pt2",
00:11:29.417 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:29.417 "is_configured": true,
00:11:29.417 "data_offset": 2048,
00:11:29.417 "data_size": 63488
00:11:29.417 },
00:11:29.417 {
00:11:29.417 "name": "pt3",
00:11:29.417 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:29.417 "is_configured": true,
00:11:29.417 "data_offset": 2048,
00:11:29.417 "data_size": 63488
00:11:29.417 }
00:11:29.417 ]
00:11:29.417 }
00:11:29.417 }
00:11:29.417 }'
00:11:29.417 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:29.677 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:11:29.677 pt2
00:11:29.677 pt3'
00:11:29.677 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:29.677 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:11:29.677 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:29.677 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:11:29.677 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.677 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:29.677 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.677 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.677 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:29.677 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:29.677 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:29.677 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:11:29.677 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:29.677 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.677 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.677 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.677 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:29.677 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:29.677 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:29.678 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:29.678 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:11:29.678 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.678 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.678 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.678 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:29.678 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:29.678 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:11:29.678 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:29.678 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.678 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.678 [2024-11-20 08:45:00.557127] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:29.678 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=985154e0-0060-44ba-ae3b-22cab62bcc92
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 985154e0-0060-44ba-ae3b-22cab62bcc92 ']'
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.958 [2024-11-20 08:45:00.600815] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:29.958 [2024-11-20 08:45:00.600870] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:29.958 [2024-11-20 08:45:00.600989] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:29.958 [2024-11-20 08:45:00.601068] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:29.958 [2024-11-20 08:45:00.601100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.958 [2024-11-20 08:45:00.748987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:11:29.958 [2024-11-20 08:45:00.751504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:11:29.958 [2024-11-20 08:45:00.751584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:11:29.958 [2024-11-20 08:45:00.751677] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:11:29.958 [2024-11-20 08:45:00.751755] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:11:29.958 [2024-11-20 08:45:00.751795] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:11:29.958 [2024-11-20 08:45:00.751822] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:29.958 [2024-11-20 08:45:00.751840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:11:29.958 request:
00:11:29.958 {
00:11:29.958 "name": "raid_bdev1",
00:11:29.958 "raid_level": "concat",
00:11:29.958 "base_bdevs": [
00:11:29.958 "malloc1",
00:11:29.958 "malloc2",
00:11:29.958 "malloc3"
00:11:29.958 ],
00:11:29.958 "strip_size_kb": 64,
00:11:29.958 "superblock": false,
00:11:29.958 "method": "bdev_raid_create",
00:11:29.958 "req_id": 1
00:11:29.958 }
00:11:29.958 Got JSON-RPC error response
00:11:29.958 response:
00:11:29.958 {
00:11:29.958 "code": -17,
00:11:29.958 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:11:29.958 }
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.958 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.958 [2024-11-20 08:45:00.820928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:29.959 [2024-11-20 08:45:00.821017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:29.959 [2024-11-20 08:45:00.821049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:11:29.959 [2024-11-20 08:45:00.821065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:29.959 [2024-11-20 08:45:00.824082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:29.959 [2024-11-20 08:45:00.824163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:11:29.959 [2024-11-20 08:45:00.824273] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:11:29.959 [2024-11-20 08:45:00.824342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:11:29.959 pt1
00:11:29.959 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.959 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:11:29.959 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:29.959 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:29.959 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:29.959 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:29.959 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:29.959 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:29.959 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:29.959 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:29.959 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:29.959 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:29.959 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:29.959 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.959 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.959 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:30.217 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:30.217 "name": "raid_bdev1",
00:11:30.217 "uuid": "985154e0-0060-44ba-ae3b-22cab62bcc92", 00:11:30.217 "strip_size_kb": 64, 00:11:30.217 "state": "configuring", 00:11:30.217 "raid_level": "concat", 00:11:30.217 "superblock": true, 00:11:30.217 "num_base_bdevs": 3, 00:11:30.217 "num_base_bdevs_discovered": 1, 00:11:30.217 "num_base_bdevs_operational": 3, 00:11:30.217 "base_bdevs_list": [ 00:11:30.217 { 00:11:30.217 "name": "pt1", 00:11:30.217 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:30.217 "is_configured": true, 00:11:30.217 "data_offset": 2048, 00:11:30.217 "data_size": 63488 00:11:30.217 }, 00:11:30.217 { 00:11:30.217 "name": null, 00:11:30.217 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:30.217 "is_configured": false, 00:11:30.217 "data_offset": 2048, 00:11:30.217 "data_size": 63488 00:11:30.217 }, 00:11:30.217 { 00:11:30.217 "name": null, 00:11:30.217 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:30.217 "is_configured": false, 00:11:30.217 "data_offset": 2048, 00:11:30.217 "data_size": 63488 00:11:30.217 } 00:11:30.217 ] 00:11:30.217 }' 00:11:30.217 08:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.217 08:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.475 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:30.475 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:30.475 08:45:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.475 08:45:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.475 [2024-11-20 08:45:01.349079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:30.475 [2024-11-20 08:45:01.349175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.475 [2024-11-20 08:45:01.349212] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:30.475 [2024-11-20 08:45:01.349228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.475 [2024-11-20 08:45:01.349777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.475 [2024-11-20 08:45:01.349814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:30.475 [2024-11-20 08:45:01.349922] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:30.475 [2024-11-20 08:45:01.349953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:30.475 pt2 00:11:30.475 08:45:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.475 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:30.475 08:45:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.475 08:45:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.475 [2024-11-20 08:45:01.361093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:30.475 08:45:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.475 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:30.475 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.475 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.475 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.475 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.475 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:11:30.475 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.475 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.475 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.475 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.475 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.475 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.475 08:45:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.475 08:45:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.475 08:45:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.734 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.734 "name": "raid_bdev1", 00:11:30.734 "uuid": "985154e0-0060-44ba-ae3b-22cab62bcc92", 00:11:30.734 "strip_size_kb": 64, 00:11:30.734 "state": "configuring", 00:11:30.734 "raid_level": "concat", 00:11:30.734 "superblock": true, 00:11:30.734 "num_base_bdevs": 3, 00:11:30.734 "num_base_bdevs_discovered": 1, 00:11:30.734 "num_base_bdevs_operational": 3, 00:11:30.734 "base_bdevs_list": [ 00:11:30.734 { 00:11:30.734 "name": "pt1", 00:11:30.734 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:30.734 "is_configured": true, 00:11:30.734 "data_offset": 2048, 00:11:30.734 "data_size": 63488 00:11:30.734 }, 00:11:30.734 { 00:11:30.734 "name": null, 00:11:30.734 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:30.734 "is_configured": false, 00:11:30.734 "data_offset": 0, 00:11:30.734 "data_size": 63488 00:11:30.734 }, 00:11:30.734 { 00:11:30.734 "name": null, 00:11:30.734 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:30.734 "is_configured": false, 00:11:30.734 "data_offset": 2048, 00:11:30.734 "data_size": 63488 00:11:30.734 } 00:11:30.734 ] 00:11:30.734 }' 00:11:30.734 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.734 08:45:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.995 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:30.995 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:30.995 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:30.995 08:45:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.995 08:45:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.995 [2024-11-20 08:45:01.897194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:30.995 [2024-11-20 08:45:01.897289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.995 [2024-11-20 08:45:01.897317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:30.995 [2024-11-20 08:45:01.897336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.995 [2024-11-20 08:45:01.897903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.995 [2024-11-20 08:45:01.897945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:30.995 [2024-11-20 08:45:01.898048] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:30.995 [2024-11-20 08:45:01.898085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:30.995 pt2 00:11:30.995 08:45:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.995 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:30.995 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:30.995 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:30.995 08:45:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.995 08:45:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.995 [2024-11-20 08:45:01.905188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:30.995 [2024-11-20 08:45:01.905249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.995 [2024-11-20 08:45:01.905272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:30.995 [2024-11-20 08:45:01.905290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.995 [2024-11-20 08:45:01.905806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.995 [2024-11-20 08:45:01.905858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:30.995 [2024-11-20 08:45:01.905950] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:30.995 [2024-11-20 08:45:01.905986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:30.995 [2024-11-20 08:45:01.906139] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:30.995 [2024-11-20 08:45:01.906179] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:30.995 [2024-11-20 08:45:01.906494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:11:30.995 [2024-11-20 08:45:01.906698] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:30.995 [2024-11-20 08:45:01.906722] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:30.995 [2024-11-20 08:45:01.906889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.256 pt3 00:11:31.256 08:45:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.256 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:31.256 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:31.256 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:31.256 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.256 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.256 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.256 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.256 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:31.256 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.256 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.256 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.256 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.256 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.256 08:45:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.256 08:45:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.256 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.256 08:45:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.257 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.257 "name": "raid_bdev1", 00:11:31.257 "uuid": "985154e0-0060-44ba-ae3b-22cab62bcc92", 00:11:31.257 "strip_size_kb": 64, 00:11:31.257 "state": "online", 00:11:31.257 "raid_level": "concat", 00:11:31.257 "superblock": true, 00:11:31.257 "num_base_bdevs": 3, 00:11:31.257 "num_base_bdevs_discovered": 3, 00:11:31.257 "num_base_bdevs_operational": 3, 00:11:31.257 "base_bdevs_list": [ 00:11:31.257 { 00:11:31.257 "name": "pt1", 00:11:31.257 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:31.257 "is_configured": true, 00:11:31.257 "data_offset": 2048, 00:11:31.257 "data_size": 63488 00:11:31.257 }, 00:11:31.257 { 00:11:31.257 "name": "pt2", 00:11:31.257 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:31.257 "is_configured": true, 00:11:31.257 "data_offset": 2048, 00:11:31.257 "data_size": 63488 00:11:31.257 }, 00:11:31.257 { 00:11:31.257 "name": "pt3", 00:11:31.257 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:31.257 "is_configured": true, 00:11:31.257 "data_offset": 2048, 00:11:31.257 "data_size": 63488 00:11:31.257 } 00:11:31.257 ] 00:11:31.257 }' 00:11:31.257 08:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.257 08:45:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.515 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:31.515 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:11:31.515 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:31.515 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:31.515 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:31.515 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:31.515 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:31.515 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.515 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:31.515 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.515 [2024-11-20 08:45:02.413700] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:31.775 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.775 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:31.775 "name": "raid_bdev1", 00:11:31.775 "aliases": [ 00:11:31.775 "985154e0-0060-44ba-ae3b-22cab62bcc92" 00:11:31.775 ], 00:11:31.775 "product_name": "Raid Volume", 00:11:31.775 "block_size": 512, 00:11:31.775 "num_blocks": 190464, 00:11:31.775 "uuid": "985154e0-0060-44ba-ae3b-22cab62bcc92", 00:11:31.775 "assigned_rate_limits": { 00:11:31.775 "rw_ios_per_sec": 0, 00:11:31.775 "rw_mbytes_per_sec": 0, 00:11:31.775 "r_mbytes_per_sec": 0, 00:11:31.775 "w_mbytes_per_sec": 0 00:11:31.775 }, 00:11:31.775 "claimed": false, 00:11:31.775 "zoned": false, 00:11:31.775 "supported_io_types": { 00:11:31.775 "read": true, 00:11:31.775 "write": true, 00:11:31.775 "unmap": true, 00:11:31.775 "flush": true, 00:11:31.775 "reset": true, 00:11:31.775 "nvme_admin": false, 00:11:31.775 "nvme_io": false, 
00:11:31.775 "nvme_io_md": false, 00:11:31.775 "write_zeroes": true, 00:11:31.775 "zcopy": false, 00:11:31.775 "get_zone_info": false, 00:11:31.775 "zone_management": false, 00:11:31.775 "zone_append": false, 00:11:31.775 "compare": false, 00:11:31.775 "compare_and_write": false, 00:11:31.775 "abort": false, 00:11:31.775 "seek_hole": false, 00:11:31.775 "seek_data": false, 00:11:31.775 "copy": false, 00:11:31.775 "nvme_iov_md": false 00:11:31.775 }, 00:11:31.775 "memory_domains": [ 00:11:31.775 { 00:11:31.775 "dma_device_id": "system", 00:11:31.775 "dma_device_type": 1 00:11:31.775 }, 00:11:31.775 { 00:11:31.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.775 "dma_device_type": 2 00:11:31.775 }, 00:11:31.775 { 00:11:31.775 "dma_device_id": "system", 00:11:31.775 "dma_device_type": 1 00:11:31.775 }, 00:11:31.775 { 00:11:31.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.775 "dma_device_type": 2 00:11:31.775 }, 00:11:31.775 { 00:11:31.775 "dma_device_id": "system", 00:11:31.775 "dma_device_type": 1 00:11:31.775 }, 00:11:31.775 { 00:11:31.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.775 "dma_device_type": 2 00:11:31.775 } 00:11:31.775 ], 00:11:31.775 "driver_specific": { 00:11:31.775 "raid": { 00:11:31.775 "uuid": "985154e0-0060-44ba-ae3b-22cab62bcc92", 00:11:31.775 "strip_size_kb": 64, 00:11:31.775 "state": "online", 00:11:31.775 "raid_level": "concat", 00:11:31.775 "superblock": true, 00:11:31.775 "num_base_bdevs": 3, 00:11:31.775 "num_base_bdevs_discovered": 3, 00:11:31.775 "num_base_bdevs_operational": 3, 00:11:31.775 "base_bdevs_list": [ 00:11:31.775 { 00:11:31.775 "name": "pt1", 00:11:31.775 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:31.775 "is_configured": true, 00:11:31.775 "data_offset": 2048, 00:11:31.775 "data_size": 63488 00:11:31.775 }, 00:11:31.775 { 00:11:31.775 "name": "pt2", 00:11:31.776 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:31.776 "is_configured": true, 00:11:31.776 "data_offset": 2048, 00:11:31.776 
"data_size": 63488 00:11:31.776 }, 00:11:31.776 { 00:11:31.776 "name": "pt3", 00:11:31.776 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:31.776 "is_configured": true, 00:11:31.776 "data_offset": 2048, 00:11:31.776 "data_size": 63488 00:11:31.776 } 00:11:31.776 ] 00:11:31.776 } 00:11:31.776 } 00:11:31.776 }' 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:31.776 pt2 00:11:31.776 pt3' 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.776 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.036 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.036 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.036 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.036 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:32.036 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.036 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.036 08:45:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:32.036 [2024-11-20 08:45:02.733749] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:32.036 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.036 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 985154e0-0060-44ba-ae3b-22cab62bcc92 '!=' 985154e0-0060-44ba-ae3b-22cab62bcc92 ']' 00:11:32.036 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:32.036 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:32.036 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:32.036 08:45:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66883 00:11:32.036 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66883 ']' 00:11:32.036 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66883 00:11:32.036 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:32.036 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.036 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66883 00:11:32.036 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.036 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.036 killing process with pid 66883 00:11:32.036 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66883' 00:11:32.036 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66883 00:11:32.036 [2024-11-20 08:45:02.817898] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:11:32.036 08:45:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66883 00:11:32.036 [2024-11-20 08:45:02.818031] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.036 [2024-11-20 08:45:02.818121] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.036 [2024-11-20 08:45:02.818159] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:32.295 [2024-11-20 08:45:03.092971] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:33.232 08:45:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:33.232 00:11:33.232 real 0m5.671s 00:11:33.232 user 0m8.582s 00:11:33.232 sys 0m0.818s 00:11:33.232 08:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.232 08:45:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.233 ************************************ 00:11:33.233 END TEST raid_superblock_test 00:11:33.233 ************************************ 00:11:33.491 08:45:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:11:33.491 08:45:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:33.491 08:45:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.491 08:45:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:33.491 ************************************ 00:11:33.491 START TEST raid_read_error_test 00:11:33.491 ************************************ 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:33.491 08:45:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.z3ti5jqcUM 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67146 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67146 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67146 ']' 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.491 08:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.491 [2024-11-20 08:45:04.285722] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:11:33.491 [2024-11-20 08:45:04.285918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67146 ] 00:11:33.749 [2024-11-20 08:45:04.483391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.749 [2024-11-20 08:45:04.634338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.008 [2024-11-20 08:45:04.837678] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.008 [2024-11-20 08:45:04.837722] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.597 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:34.597 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:34.597 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:34.597 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:34.597 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.598 BaseBdev1_malloc 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.598 true 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.598 [2024-11-20 08:45:05.314024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:34.598 [2024-11-20 08:45:05.314100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.598 [2024-11-20 08:45:05.314133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:34.598 [2024-11-20 08:45:05.314167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.598 [2024-11-20 08:45:05.317173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.598 [2024-11-20 08:45:05.317215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:34.598 BaseBdev1 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.598 BaseBdev2_malloc 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.598 true 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.598 [2024-11-20 08:45:05.382424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:34.598 [2024-11-20 08:45:05.382510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.598 [2024-11-20 08:45:05.382544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:34.598 [2024-11-20 08:45:05.382574] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.598 [2024-11-20 08:45:05.385636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.598 [2024-11-20 08:45:05.385686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:34.598 BaseBdev2 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.598 BaseBdev3_malloc 00:11:34.598 08:45:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.598 true 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.598 [2024-11-20 08:45:05.465448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:34.598 [2024-11-20 08:45:05.465529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.598 [2024-11-20 08:45:05.465557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:34.598 [2024-11-20 08:45:05.465575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.598 [2024-11-20 08:45:05.468505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.598 [2024-11-20 08:45:05.468555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:34.598 BaseBdev3 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.598 [2024-11-20 08:45:05.477540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.598 [2024-11-20 08:45:05.479942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:34.598 [2024-11-20 08:45:05.480062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:34.598 [2024-11-20 08:45:05.480345] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:34.598 [2024-11-20 08:45:05.480375] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:34.598 [2024-11-20 08:45:05.480720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:34.598 [2024-11-20 08:45:05.480938] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:34.598 [2024-11-20 08:45:05.480974] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:34.598 [2024-11-20 08:45:05.481188] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.598 08:45:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.598 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.856 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.856 "name": "raid_bdev1", 00:11:34.856 "uuid": "c0d17f20-a92a-47ac-b2e1-6a124c54464c", 00:11:34.856 "strip_size_kb": 64, 00:11:34.856 "state": "online", 00:11:34.856 "raid_level": "concat", 00:11:34.856 "superblock": true, 00:11:34.856 "num_base_bdevs": 3, 00:11:34.856 "num_base_bdevs_discovered": 3, 00:11:34.856 "num_base_bdevs_operational": 3, 00:11:34.856 "base_bdevs_list": [ 00:11:34.856 { 00:11:34.856 "name": "BaseBdev1", 00:11:34.856 "uuid": "5d135f77-0ab0-5264-833e-deba87659d22", 00:11:34.856 "is_configured": true, 00:11:34.856 "data_offset": 2048, 00:11:34.856 "data_size": 63488 00:11:34.856 }, 00:11:34.856 { 00:11:34.856 "name": "BaseBdev2", 00:11:34.856 "uuid": "ba9428d5-d155-5f67-be24-674e83e95d8f", 00:11:34.856 "is_configured": true, 00:11:34.856 "data_offset": 2048, 00:11:34.857 "data_size": 63488 
00:11:34.857 }, 00:11:34.857 { 00:11:34.857 "name": "BaseBdev3", 00:11:34.857 "uuid": "b50df3ae-193a-50f9-8e0d-d9df94a72f6c", 00:11:34.857 "is_configured": true, 00:11:34.857 "data_offset": 2048, 00:11:34.857 "data_size": 63488 00:11:34.857 } 00:11:34.857 ] 00:11:34.857 }' 00:11:34.857 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.857 08:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.115 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:35.115 08:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:35.374 [2024-11-20 08:45:06.099071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:36.311 08:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:36.311 08:45:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.311 08:45:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.311 08:45:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.311 08:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:36.311 08:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:36.311 08:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:36.311 08:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:36.311 08:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.311 08:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:36.311 08:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.311 08:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.311 08:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.311 08:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.311 08:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.311 08:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.311 08:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.311 08:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.311 08:45:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.311 08:45:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.311 08:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.311 08:45:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.311 08:45:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.311 "name": "raid_bdev1", 00:11:36.311 "uuid": "c0d17f20-a92a-47ac-b2e1-6a124c54464c", 00:11:36.311 "strip_size_kb": 64, 00:11:36.311 "state": "online", 00:11:36.311 "raid_level": "concat", 00:11:36.311 "superblock": true, 00:11:36.311 "num_base_bdevs": 3, 00:11:36.311 "num_base_bdevs_discovered": 3, 00:11:36.311 "num_base_bdevs_operational": 3, 00:11:36.311 "base_bdevs_list": [ 00:11:36.311 { 00:11:36.311 "name": "BaseBdev1", 00:11:36.311 "uuid": "5d135f77-0ab0-5264-833e-deba87659d22", 00:11:36.311 "is_configured": true, 00:11:36.311 "data_offset": 2048, 00:11:36.311 "data_size": 63488 
00:11:36.311 }, 00:11:36.311 { 00:11:36.311 "name": "BaseBdev2", 00:11:36.311 "uuid": "ba9428d5-d155-5f67-be24-674e83e95d8f", 00:11:36.311 "is_configured": true, 00:11:36.311 "data_offset": 2048, 00:11:36.311 "data_size": 63488 00:11:36.311 }, 00:11:36.311 { 00:11:36.311 "name": "BaseBdev3", 00:11:36.312 "uuid": "b50df3ae-193a-50f9-8e0d-d9df94a72f6c", 00:11:36.312 "is_configured": true, 00:11:36.312 "data_offset": 2048, 00:11:36.312 "data_size": 63488 00:11:36.312 } 00:11:36.312 ] 00:11:36.312 }' 00:11:36.312 08:45:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.312 08:45:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.570 08:45:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:36.570 08:45:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.570 08:45:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.828 [2024-11-20 08:45:07.485657] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:36.828 [2024-11-20 08:45:07.485695] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.828 [2024-11-20 08:45:07.489013] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.828 [2024-11-20 08:45:07.489077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.828 [2024-11-20 08:45:07.489130] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.828 [2024-11-20 08:45:07.489165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:36.828 { 00:11:36.828 "results": [ 00:11:36.828 { 00:11:36.828 "job": "raid_bdev1", 00:11:36.828 "core_mask": "0x1", 00:11:36.828 "workload": "randrw", 00:11:36.828 "percentage": 50, 
00:11:36.828 "status": "finished", 00:11:36.828 "queue_depth": 1, 00:11:36.828 "io_size": 131072, 00:11:36.828 "runtime": 1.384189, 00:11:36.828 "iops": 10643.777692208218, 00:11:36.828 "mibps": 1330.4722115260272, 00:11:36.828 "io_failed": 1, 00:11:36.828 "io_timeout": 0, 00:11:36.828 "avg_latency_us": 131.19539568345323, 00:11:36.828 "min_latency_us": 42.589090909090906, 00:11:36.828 "max_latency_us": 1854.370909090909 00:11:36.828 } 00:11:36.828 ], 00:11:36.828 "core_count": 1 00:11:36.828 } 00:11:36.828 08:45:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.828 08:45:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67146 00:11:36.828 08:45:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67146 ']' 00:11:36.828 08:45:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67146 00:11:36.828 08:45:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:36.828 08:45:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:36.828 08:45:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67146 00:11:36.828 killing process with pid 67146 00:11:36.828 08:45:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:36.828 08:45:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:36.828 08:45:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67146' 00:11:36.828 08:45:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67146 00:11:36.828 [2024-11-20 08:45:07.529488] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:36.828 08:45:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67146 00:11:37.087 [2024-11-20 
08:45:07.742684] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:38.025 08:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.z3ti5jqcUM 00:11:38.025 08:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:38.025 08:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:38.025 08:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:38.025 08:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:38.025 ************************************ 00:11:38.025 END TEST raid_read_error_test 00:11:38.025 ************************************ 00:11:38.025 08:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:38.025 08:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:38.025 08:45:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:38.025 00:11:38.025 real 0m4.686s 00:11:38.025 user 0m5.770s 00:11:38.025 sys 0m0.587s 00:11:38.025 08:45:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.025 08:45:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.025 08:45:08 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:11:38.025 08:45:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:38.025 08:45:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.025 08:45:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:38.025 ************************************ 00:11:38.025 START TEST raid_write_error_test 00:11:38.025 ************************************ 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:11:38.025 08:45:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:38.025 08:45:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.uzB43nSehS 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67292 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67292 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67292 ']' 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.025 08:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.284 [2024-11-20 08:45:09.027053] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:11:38.284 [2024-11-20 08:45:09.027268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67292 ] 00:11:38.541 [2024-11-20 08:45:09.209097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.541 [2024-11-20 08:45:09.339219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.798 [2024-11-20 08:45:09.542412] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:38.798 [2024-11-20 08:45:09.542763] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.366 BaseBdev1_malloc 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.366 true 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.366 [2024-11-20 08:45:10.091590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:39.366 [2024-11-20 08:45:10.091692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.366 [2024-11-20 08:45:10.091722] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:39.366 [2024-11-20 08:45:10.091740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.366 [2024-11-20 08:45:10.094575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.366 [2024-11-20 08:45:10.094626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:39.366 BaseBdev1 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:39.366 BaseBdev2_malloc 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.366 true 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.366 [2024-11-20 08:45:10.151705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:39.366 [2024-11-20 08:45:10.151961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.366 [2024-11-20 08:45:10.152016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:39.366 [2024-11-20 08:45:10.152051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.366 [2024-11-20 08:45:10.154921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.366 [2024-11-20 08:45:10.155105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:39.366 BaseBdev2 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:39.366 08:45:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.366 BaseBdev3_malloc 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.366 true 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.366 [2024-11-20 08:45:10.229281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:39.366 [2024-11-20 08:45:10.229541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.366 [2024-11-20 08:45:10.229605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:39.366 [2024-11-20 08:45:10.229639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.366 [2024-11-20 08:45:10.232794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.366 [2024-11-20 08:45:10.232992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:39.366 BaseBdev3 00:11:39.366 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.367 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:39.367 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.367 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.367 [2024-11-20 08:45:10.241526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:39.367 [2024-11-20 08:45:10.244172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:39.367 [2024-11-20 08:45:10.244289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:39.367 [2024-11-20 08:45:10.244579] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:39.367 [2024-11-20 08:45:10.244598] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:39.367 [2024-11-20 08:45:10.244953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:39.367 [2024-11-20 08:45:10.245204] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:39.367 [2024-11-20 08:45:10.245240] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:39.367 [2024-11-20 08:45:10.245638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.367 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.367 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:39.367 08:45:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.367 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.367 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.367 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.367 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.367 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.367 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.367 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.367 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.367 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.367 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.367 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.367 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.367 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.625 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.625 "name": "raid_bdev1", 00:11:39.625 "uuid": "153afea2-3ef3-44ba-9ef1-4c494bb2750d", 00:11:39.625 "strip_size_kb": 64, 00:11:39.625 "state": "online", 00:11:39.625 "raid_level": "concat", 00:11:39.625 "superblock": true, 00:11:39.625 "num_base_bdevs": 3, 00:11:39.625 "num_base_bdevs_discovered": 3, 00:11:39.625 "num_base_bdevs_operational": 3, 00:11:39.625 "base_bdevs_list": [ 00:11:39.625 { 00:11:39.626 
"name": "BaseBdev1", 00:11:39.626 "uuid": "c00bbfeb-9a8d-52b8-9e40-18eb54a5b684", 00:11:39.626 "is_configured": true, 00:11:39.626 "data_offset": 2048, 00:11:39.626 "data_size": 63488 00:11:39.626 }, 00:11:39.626 { 00:11:39.626 "name": "BaseBdev2", 00:11:39.626 "uuid": "85b01896-f595-5370-a2af-d218d3a7dedb", 00:11:39.626 "is_configured": true, 00:11:39.626 "data_offset": 2048, 00:11:39.626 "data_size": 63488 00:11:39.626 }, 00:11:39.626 { 00:11:39.626 "name": "BaseBdev3", 00:11:39.626 "uuid": "49c2be0a-1596-58a9-a3bb-a16354c5f4eb", 00:11:39.626 "is_configured": true, 00:11:39.626 "data_offset": 2048, 00:11:39.626 "data_size": 63488 00:11:39.626 } 00:11:39.626 ] 00:11:39.626 }' 00:11:39.626 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.626 08:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.884 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:39.884 08:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:40.142 [2024-11-20 08:45:10.935047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:41.077 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:41.078 08:45:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.078 08:45:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.078 08:45:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.078 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:41.078 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:41.078 08:45:11 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:41.078 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:41.078 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.078 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.078 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.078 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.078 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.078 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.078 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.078 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.078 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.078 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.078 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.078 08:45:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.078 08:45:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.078 08:45:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.078 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.078 "name": "raid_bdev1", 00:11:41.078 "uuid": "153afea2-3ef3-44ba-9ef1-4c494bb2750d", 00:11:41.078 "strip_size_kb": 64, 00:11:41.078 "state": "online", 
00:11:41.078 "raid_level": "concat", 00:11:41.078 "superblock": true, 00:11:41.078 "num_base_bdevs": 3, 00:11:41.078 "num_base_bdevs_discovered": 3, 00:11:41.078 "num_base_bdevs_operational": 3, 00:11:41.078 "base_bdevs_list": [ 00:11:41.078 { 00:11:41.078 "name": "BaseBdev1", 00:11:41.078 "uuid": "c00bbfeb-9a8d-52b8-9e40-18eb54a5b684", 00:11:41.078 "is_configured": true, 00:11:41.078 "data_offset": 2048, 00:11:41.078 "data_size": 63488 00:11:41.078 }, 00:11:41.078 { 00:11:41.078 "name": "BaseBdev2", 00:11:41.078 "uuid": "85b01896-f595-5370-a2af-d218d3a7dedb", 00:11:41.078 "is_configured": true, 00:11:41.078 "data_offset": 2048, 00:11:41.078 "data_size": 63488 00:11:41.078 }, 00:11:41.078 { 00:11:41.078 "name": "BaseBdev3", 00:11:41.078 "uuid": "49c2be0a-1596-58a9-a3bb-a16354c5f4eb", 00:11:41.078 "is_configured": true, 00:11:41.078 "data_offset": 2048, 00:11:41.078 "data_size": 63488 00:11:41.078 } 00:11:41.078 ] 00:11:41.078 }' 00:11:41.078 08:45:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.078 08:45:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.646 08:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:41.646 08:45:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.646 08:45:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.646 [2024-11-20 08:45:12.293283] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:41.646 [2024-11-20 08:45:12.293461] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:41.646 [2024-11-20 08:45:12.296865] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.646 [2024-11-20 08:45:12.297044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.646 [2024-11-20 08:45:12.297158] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.646 { 00:11:41.646 "results": [ 00:11:41.646 { 00:11:41.646 "job": "raid_bdev1", 00:11:41.646 "core_mask": "0x1", 00:11:41.646 "workload": "randrw", 00:11:41.646 "percentage": 50, 00:11:41.646 "status": "finished", 00:11:41.646 "queue_depth": 1, 00:11:41.646 "io_size": 131072, 00:11:41.646 "runtime": 1.355963, 00:11:41.646 "iops": 10568.872454484377, 00:11:41.646 "mibps": 1321.1090568105471, 00:11:41.646 "io_failed": 1, 00:11:41.646 "io_timeout": 0, 00:11:41.646 "avg_latency_us": 132.17417603328852, 00:11:41.646 "min_latency_us": 38.63272727272727, 00:11:41.646 "max_latency_us": 2234.181818181818 00:11:41.646 } 00:11:41.646 ], 00:11:41.646 "core_count": 1 00:11:41.646 } 00:11:41.646 [2024-11-20 08:45:12.297378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:41.646 08:45:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.646 08:45:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67292 00:11:41.646 08:45:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67292 ']' 00:11:41.646 08:45:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67292 00:11:41.646 08:45:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:41.646 08:45:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.646 08:45:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67292 00:11:41.646 killing process with pid 67292 00:11:41.646 08:45:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.646 08:45:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.646 08:45:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67292' 00:11:41.646 08:45:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67292 00:11:41.646 [2024-11-20 08:45:12.332263] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:41.646 08:45:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67292 00:11:41.646 [2024-11-20 08:45:12.543500] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:43.022 08:45:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.uzB43nSehS 00:11:43.022 08:45:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:43.022 08:45:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:43.022 08:45:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:43.022 08:45:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:43.022 08:45:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:43.022 08:45:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:43.022 08:45:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:43.022 00:11:43.022 real 0m4.729s 00:11:43.022 user 0m5.900s 00:11:43.022 sys 0m0.577s 00:11:43.022 08:45:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.022 08:45:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.022 ************************************ 00:11:43.022 END TEST raid_write_error_test 00:11:43.022 ************************************ 00:11:43.022 08:45:13 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:43.022 08:45:13 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:11:43.022 08:45:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:43.022 08:45:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.022 08:45:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:43.022 ************************************ 00:11:43.022 START TEST raid_state_function_test 00:11:43.022 ************************************ 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.022 Process raid pid: 67431 00:11:43.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67431 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67431' 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67431 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # 
'[' -z 67431 ']' 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.022 08:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.022 [2024-11-20 08:45:13.802045] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:11:43.022 [2024-11-20 08:45:13.802507] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.331 [2024-11-20 08:45:13.981745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.331 [2024-11-20 08:45:14.114138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.590 [2024-11-20 08:45:14.364049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:43.590 [2024-11-20 08:45:14.364142] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:43.847 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:43.847 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:43.847 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:43.847 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:43.847 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.847 [2024-11-20 08:45:14.746792] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:43.847 [2024-11-20 08:45:14.746862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:43.847 [2024-11-20 08:45:14.746880] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:43.848 [2024-11-20 08:45:14.746898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:43.848 [2024-11-20 08:45:14.746909] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:43.848 [2024-11-20 08:45:14.746923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:43.848 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.848 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:43.848 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.848 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.848 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.848 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.848 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:43.848 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.848 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.848 08:45:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.848 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.848 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.848 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.848 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.848 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.106 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.106 08:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.106 "name": "Existed_Raid", 00:11:44.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.106 "strip_size_kb": 0, 00:11:44.106 "state": "configuring", 00:11:44.106 "raid_level": "raid1", 00:11:44.106 "superblock": false, 00:11:44.106 "num_base_bdevs": 3, 00:11:44.106 "num_base_bdevs_discovered": 0, 00:11:44.106 "num_base_bdevs_operational": 3, 00:11:44.106 "base_bdevs_list": [ 00:11:44.106 { 00:11:44.106 "name": "BaseBdev1", 00:11:44.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.106 "is_configured": false, 00:11:44.106 "data_offset": 0, 00:11:44.106 "data_size": 0 00:11:44.106 }, 00:11:44.106 { 00:11:44.106 "name": "BaseBdev2", 00:11:44.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.106 "is_configured": false, 00:11:44.106 "data_offset": 0, 00:11:44.106 "data_size": 0 00:11:44.106 }, 00:11:44.106 { 00:11:44.106 "name": "BaseBdev3", 00:11:44.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.106 "is_configured": false, 00:11:44.106 "data_offset": 0, 00:11:44.106 "data_size": 0 00:11:44.106 } 00:11:44.106 ] 00:11:44.106 }' 00:11:44.106 08:45:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.106 08:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.364 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:44.364 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.364 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.364 [2024-11-20 08:45:15.262889] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:44.364 [2024-11-20 08:45:15.263084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:44.364 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.364 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:44.364 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.364 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.364 [2024-11-20 08:45:15.270869] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:44.364 [2024-11-20 08:45:15.270933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:44.364 [2024-11-20 08:45:15.270950] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:44.364 [2024-11-20 08:45:15.270967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:44.364 [2024-11-20 08:45:15.270977] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:44.364 [2024-11-20 08:45:15.270992] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:44.364 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.364 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:44.364 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.364 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.623 [2024-11-20 08:45:15.316681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:44.623 BaseBdev1 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.623 [ 00:11:44.623 { 00:11:44.623 "name": "BaseBdev1", 00:11:44.623 "aliases": [ 00:11:44.623 "b8d005fe-d487-436c-adc5-170615c47959" 00:11:44.623 ], 00:11:44.623 "product_name": "Malloc disk", 00:11:44.623 "block_size": 512, 00:11:44.623 "num_blocks": 65536, 00:11:44.623 "uuid": "b8d005fe-d487-436c-adc5-170615c47959", 00:11:44.623 "assigned_rate_limits": { 00:11:44.623 "rw_ios_per_sec": 0, 00:11:44.623 "rw_mbytes_per_sec": 0, 00:11:44.623 "r_mbytes_per_sec": 0, 00:11:44.623 "w_mbytes_per_sec": 0 00:11:44.623 }, 00:11:44.623 "claimed": true, 00:11:44.623 "claim_type": "exclusive_write", 00:11:44.623 "zoned": false, 00:11:44.623 "supported_io_types": { 00:11:44.623 "read": true, 00:11:44.623 "write": true, 00:11:44.623 "unmap": true, 00:11:44.623 "flush": true, 00:11:44.623 "reset": true, 00:11:44.623 "nvme_admin": false, 00:11:44.623 "nvme_io": false, 00:11:44.623 "nvme_io_md": false, 00:11:44.623 "write_zeroes": true, 00:11:44.623 "zcopy": true, 00:11:44.623 "get_zone_info": false, 00:11:44.623 "zone_management": false, 00:11:44.623 "zone_append": false, 00:11:44.623 "compare": false, 00:11:44.623 "compare_and_write": false, 00:11:44.623 "abort": true, 00:11:44.623 "seek_hole": false, 00:11:44.623 "seek_data": false, 00:11:44.623 "copy": true, 00:11:44.623 "nvme_iov_md": false 00:11:44.623 }, 00:11:44.623 "memory_domains": [ 00:11:44.623 { 00:11:44.623 "dma_device_id": "system", 00:11:44.623 "dma_device_type": 1 00:11:44.623 }, 00:11:44.623 { 00:11:44.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.623 "dma_device_type": 2 00:11:44.623 } 00:11:44.623 ], 00:11:44.623 "driver_specific": {} 00:11:44.623 } 00:11:44.623 ] 00:11:44.623 08:45:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:44.623 "name": "Existed_Raid", 00:11:44.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.623 "strip_size_kb": 0, 00:11:44.623 "state": "configuring", 00:11:44.623 "raid_level": "raid1", 00:11:44.623 "superblock": false, 00:11:44.623 "num_base_bdevs": 3, 00:11:44.623 "num_base_bdevs_discovered": 1, 00:11:44.623 "num_base_bdevs_operational": 3, 00:11:44.623 "base_bdevs_list": [ 00:11:44.623 { 00:11:44.623 "name": "BaseBdev1", 00:11:44.623 "uuid": "b8d005fe-d487-436c-adc5-170615c47959", 00:11:44.623 "is_configured": true, 00:11:44.623 "data_offset": 0, 00:11:44.623 "data_size": 65536 00:11:44.623 }, 00:11:44.623 { 00:11:44.623 "name": "BaseBdev2", 00:11:44.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.623 "is_configured": false, 00:11:44.623 "data_offset": 0, 00:11:44.623 "data_size": 0 00:11:44.623 }, 00:11:44.623 { 00:11:44.623 "name": "BaseBdev3", 00:11:44.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.623 "is_configured": false, 00:11:44.623 "data_offset": 0, 00:11:44.623 "data_size": 0 00:11:44.623 } 00:11:44.623 ] 00:11:44.623 }' 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.623 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.192 08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:45.192 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.192 08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.192 [2024-11-20 08:45:15.896887] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:45.192 [2024-11-20 08:45:15.896950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:45.192 08:45:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:45.192  08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:11:45.192  08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:45.192  08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.192 [2024-11-20 08:45:15.904930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:45.192 [2024-11-20 08:45:15.907377] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:45.192 [2024-11-20 08:45:15.907566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:45.192 [2024-11-20 08:45:15.907608] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:45.192 [2024-11-20 08:45:15.907629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:45.192  08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:45.192  08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:11:45.192  08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:45.192  08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:11:45.192  08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:45.192  08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:45.192  08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:45.192  08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:45.192  08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:45.192  08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:45.192  08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:45.192  08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:45.192  08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:45.192  08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:45.192  08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:45.192  08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:45.192  08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.192  08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:45.192  08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:45.192     "name": "Existed_Raid",
00:11:45.192     "uuid": "00000000-0000-0000-0000-000000000000",
00:11:45.192     "strip_size_kb": 0,
00:11:45.192     "state": "configuring",
00:11:45.192     "raid_level": "raid1",
00:11:45.192     "superblock": false,
00:11:45.192     "num_base_bdevs": 3,
00:11:45.192     "num_base_bdevs_discovered": 1,
00:11:45.193     "num_base_bdevs_operational": 3,
00:11:45.193     "base_bdevs_list": [
00:11:45.193       {
00:11:45.193         "name": "BaseBdev1",
00:11:45.193         "uuid": "b8d005fe-d487-436c-adc5-170615c47959",
00:11:45.193         "is_configured": true,
00:11:45.193         "data_offset": 0,
00:11:45.193         "data_size": 65536
00:11:45.193       },
00:11:45.193       {
00:11:45.193         "name": "BaseBdev2",
00:11:45.193         "uuid": "00000000-0000-0000-0000-000000000000",
00:11:45.193         "is_configured": false,
00:11:45.193         "data_offset": 0,
00:11:45.193         "data_size": 0
00:11:45.193       },
00:11:45.193       {
00:11:45.193         "name": "BaseBdev3",
00:11:45.193         "uuid": "00000000-0000-0000-0000-000000000000",
00:11:45.193         "is_configured": false,
00:11:45.193         "data_offset": 0,
00:11:45.193         "data_size": 0
00:11:45.193       }
00:11:45.193     ]
00:11:45.193   }'
00:11:45.193  08:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:45.193  08:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.760  08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:45.760  08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:45.760  08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.760 [2024-11-20 08:45:16.520009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:45.760 BaseBdev2
00:11:45.760  08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:45.760  08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:11:45.760  08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:11:45.760  08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:45.760  08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:45.760  08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:45.760  08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:45.760  08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:45.760  08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:45.760  08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.760  08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:45.760  08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:45.760  08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:45.760  08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.760 [
00:11:45.760   {
00:11:45.760     "name": "BaseBdev2",
00:11:45.760     "aliases": [
00:11:45.760       "850c647a-72d9-4fa6-87fa-f72723a1d98e"
00:11:45.760     ],
00:11:45.760     "product_name": "Malloc disk",
00:11:45.760     "block_size": 512,
00:11:45.760     "num_blocks": 65536,
00:11:45.760     "uuid": "850c647a-72d9-4fa6-87fa-f72723a1d98e",
00:11:45.760     "assigned_rate_limits": {
00:11:45.760       "rw_ios_per_sec": 0,
00:11:45.760       "rw_mbytes_per_sec": 0,
00:11:45.760       "r_mbytes_per_sec": 0,
00:11:45.760       "w_mbytes_per_sec": 0
00:11:45.760     },
00:11:45.760     "claimed": true,
00:11:45.760     "claim_type": "exclusive_write",
00:11:45.760     "zoned": false,
00:11:45.760     "supported_io_types": {
00:11:45.760       "read": true,
00:11:45.760       "write": true,
00:11:45.760       "unmap": true,
00:11:45.760       "flush": true,
00:11:45.760       "reset": true,
00:11:45.760       "nvme_admin": false,
00:11:45.760       "nvme_io": false,
00:11:45.760       "nvme_io_md": false,
00:11:45.760       "write_zeroes": true,
00:11:45.760       "zcopy": true,
00:11:45.760       "get_zone_info": false,
00:11:45.760       "zone_management": false,
00:11:45.760       "zone_append": false,
00:11:45.760       "compare": false,
00:11:45.760       "compare_and_write": false,
00:11:45.760       "abort": true,
00:11:45.760       "seek_hole": false,
00:11:45.760       "seek_data": false,
00:11:45.760       "copy": true,
00:11:45.760       "nvme_iov_md": false
00:11:45.760     },
00:11:45.760     "memory_domains": [
00:11:45.760       {
00:11:45.760         "dma_device_id": "system",
00:11:45.760         "dma_device_type": 1
00:11:45.760       },
00:11:45.760       {
00:11:45.760         "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:45.760         "dma_device_type": 2
00:11:45.760       }
00:11:45.760     ],
00:11:45.760     "driver_specific": {}
00:11:45.760   }
00:11:45.760 ]
00:11:45.760  08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:45.760  08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:45.760  08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:45.760  08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:45.760  08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:11:45.760  08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:45.761  08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:45.761  08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:45.761  08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:45.761  08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:45.761  08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:45.761  08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:45.761  08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:45.761  08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:45.761  08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:45.761  08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:45.761  08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:45.761  08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:45.761  08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:45.761  08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:45.761     "name": "Existed_Raid",
00:11:45.761     "uuid": "00000000-0000-0000-0000-000000000000",
00:11:45.761     "strip_size_kb": 0,
00:11:45.761     "state": "configuring",
00:11:45.761     "raid_level": "raid1",
00:11:45.761     "superblock": false,
00:11:45.761     "num_base_bdevs": 3,
00:11:45.761     "num_base_bdevs_discovered": 2,
00:11:45.761     "num_base_bdevs_operational": 3,
00:11:45.761     "base_bdevs_list": [
00:11:45.761       {
00:11:45.761         "name": "BaseBdev1",
00:11:45.761         "uuid": "b8d005fe-d487-436c-adc5-170615c47959",
00:11:45.761         "is_configured": true,
00:11:45.761         "data_offset": 0,
00:11:45.761         "data_size": 65536
00:11:45.761       },
00:11:45.761       {
00:11:45.761         "name": "BaseBdev2",
00:11:45.761         "uuid": "850c647a-72d9-4fa6-87fa-f72723a1d98e",
00:11:45.761         "is_configured": true,
00:11:45.761         "data_offset": 0,
00:11:45.761         "data_size": 65536
00:11:45.761       },
00:11:45.761       {
00:11:45.761         "name": "BaseBdev3",
00:11:45.761         "uuid": "00000000-0000-0000-0000-000000000000",
00:11:45.761         "is_configured": false,
00:11:45.761         "data_offset": 0,
00:11:45.761         "data_size": 0
00:11:45.761       }
00:11:45.761     ]
00:11:45.761   }'
00:11:45.761  08:45:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:45.761  08:45:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.329  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:46.329  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:46.329  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.329 [2024-11-20 08:45:17.145858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:46.329 [2024-11-20 08:45:17.146130] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:11:46.329 [2024-11-20 08:45:17.146200] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:11:46.329 [2024-11-20 08:45:17.146586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:11:46.329 [2024-11-20 08:45:17.146824] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:11:46.329 [2024-11-20 08:45:17.146842] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:11:46.329 [2024-11-20 08:45:17.147213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:46.329 BaseBdev3
00:11:46.329  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:46.329  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:11:46.329  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:11:46.329  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:46.329  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:46.329  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:46.329  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:46.329  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:46.329  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:46.329  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.329  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:46.329  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:46.329  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:46.329  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.329 [
00:11:46.329   {
00:11:46.329     "name": "BaseBdev3",
00:11:46.329     "aliases": [
00:11:46.329       "a768e28e-aaef-4a5f-8f58-e0fe3501f9ba"
00:11:46.329     ],
00:11:46.329     "product_name": "Malloc disk",
00:11:46.329     "block_size": 512,
00:11:46.329     "num_blocks": 65536,
00:11:46.329     "uuid": "a768e28e-aaef-4a5f-8f58-e0fe3501f9ba",
00:11:46.329     "assigned_rate_limits": {
00:11:46.329       "rw_ios_per_sec": 0,
00:11:46.329       "rw_mbytes_per_sec": 0,
00:11:46.329       "r_mbytes_per_sec": 0,
00:11:46.329       "w_mbytes_per_sec": 0
00:11:46.329     },
00:11:46.329     "claimed": true,
00:11:46.329     "claim_type": "exclusive_write",
00:11:46.329     "zoned": false,
00:11:46.329     "supported_io_types": {
00:11:46.329       "read": true,
00:11:46.329       "write": true,
00:11:46.329       "unmap": true,
00:11:46.329       "flush": true,
00:11:46.329       "reset": true,
00:11:46.329       "nvme_admin": false,
00:11:46.329       "nvme_io": false,
00:11:46.329       "nvme_io_md": false,
00:11:46.329       "write_zeroes": true,
00:11:46.329       "zcopy": true,
00:11:46.329       "get_zone_info": false,
00:11:46.329       "zone_management": false,
00:11:46.329       "zone_append": false,
00:11:46.329       "compare": false,
00:11:46.329       "compare_and_write": false,
00:11:46.329       "abort": true,
00:11:46.329       "seek_hole": false,
00:11:46.329       "seek_data": false,
00:11:46.329       "copy": true,
00:11:46.329       "nvme_iov_md": false
00:11:46.329     },
00:11:46.329     "memory_domains": [
00:11:46.329       {
00:11:46.330         "dma_device_id": "system",
00:11:46.330         "dma_device_type": 1
00:11:46.330       },
00:11:46.330       {
00:11:46.330         "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:46.330         "dma_device_type": 2
00:11:46.330       }
00:11:46.330     ],
00:11:46.330     "driver_specific": {}
00:11:46.330   }
00:11:46.330 ]
00:11:46.330  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:46.330  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:46.330  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:46.330  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:46.330  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:11:46.330  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:46.330  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:46.330  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:46.330  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:46.330  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:46.330  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:46.330  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:46.330  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:46.330  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:46.330  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:46.330  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:46.330  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:46.330  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.330  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:46.589  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:46.589     "name": "Existed_Raid",
00:11:46.589     "uuid": "ee1bb014-174a-4c81-af18-a6bee867512b",
00:11:46.589     "strip_size_kb": 0,
00:11:46.589     "state": "online",
00:11:46.589     "raid_level": "raid1",
00:11:46.589     "superblock": false,
00:11:46.589     "num_base_bdevs": 3,
00:11:46.589     "num_base_bdevs_discovered": 3,
00:11:46.589     "num_base_bdevs_operational": 3,
00:11:46.589     "base_bdevs_list": [
00:11:46.589       {
00:11:46.589         "name": "BaseBdev1",
00:11:46.589         "uuid": "b8d005fe-d487-436c-adc5-170615c47959",
00:11:46.589         "is_configured": true,
00:11:46.589         "data_offset": 0,
00:11:46.589         "data_size": 65536
00:11:46.589       },
00:11:46.589       {
00:11:46.589         "name": "BaseBdev2",
00:11:46.589         "uuid": "850c647a-72d9-4fa6-87fa-f72723a1d98e",
00:11:46.589         "is_configured": true,
00:11:46.589         "data_offset": 0,
00:11:46.589         "data_size": 65536
00:11:46.589       },
00:11:46.589       {
00:11:46.589         "name": "BaseBdev3",
00:11:46.589         "uuid": "a768e28e-aaef-4a5f-8f58-e0fe3501f9ba",
00:11:46.589         "is_configured": true,
00:11:46.589         "data_offset": 0,
00:11:46.589         "data_size": 65536
00:11:46.589       }
00:11:46.589     ]
00:11:46.589   }'
00:11:46.589  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:46.589  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.848  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:11:46.848  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:11:46.848  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:46.848  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:46.848  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:46.848  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:46.848  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:46.848  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:11:46.848  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:46.848  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.848 [2024-11-20 08:45:17.734472] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:46.848  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.107  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:47.107   "name": "Existed_Raid",
00:11:47.107   "aliases": [
00:11:47.107     "ee1bb014-174a-4c81-af18-a6bee867512b"
00:11:47.107   ],
00:11:47.107   "product_name": "Raid Volume",
00:11:47.107   "block_size": 512,
00:11:47.107   "num_blocks": 65536,
00:11:47.107   "uuid": "ee1bb014-174a-4c81-af18-a6bee867512b",
00:11:47.107   "assigned_rate_limits": {
00:11:47.107     "rw_ios_per_sec": 0,
00:11:47.107     "rw_mbytes_per_sec": 0,
00:11:47.107     "r_mbytes_per_sec": 0,
00:11:47.107     "w_mbytes_per_sec": 0
00:11:47.107   },
00:11:47.107   "claimed": false,
00:11:47.108   "zoned": false,
00:11:47.108   "supported_io_types": {
00:11:47.108     "read": true,
00:11:47.108     "write": true,
00:11:47.108     "unmap": false,
00:11:47.108     "flush": false,
00:11:47.108     "reset": true,
00:11:47.108     "nvme_admin": false,
00:11:47.108     "nvme_io": false,
00:11:47.108     "nvme_io_md": false,
00:11:47.108     "write_zeroes": true,
00:11:47.108     "zcopy": false,
00:11:47.108     "get_zone_info": false,
00:11:47.108     "zone_management": false,
00:11:47.108     "zone_append": false,
00:11:47.108     "compare": false,
00:11:47.108     "compare_and_write": false,
00:11:47.108     "abort": false,
00:11:47.108     "seek_hole": false,
00:11:47.108     "seek_data": false,
00:11:47.108     "copy": false,
00:11:47.108     "nvme_iov_md": false
00:11:47.108   },
00:11:47.108   "memory_domains": [
00:11:47.108     {
00:11:47.108       "dma_device_id": "system",
00:11:47.108       "dma_device_type": 1
00:11:47.108     },
00:11:47.108     {
00:11:47.108       "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:47.108       "dma_device_type": 2
00:11:47.108     },
00:11:47.108     {
00:11:47.108       "dma_device_id": "system",
00:11:47.108       "dma_device_type": 1
00:11:47.108     },
00:11:47.108     {
00:11:47.108       "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:47.108       "dma_device_type": 2
00:11:47.108     },
00:11:47.108     {
00:11:47.108       "dma_device_id": "system",
00:11:47.108       "dma_device_type": 1
00:11:47.108     },
00:11:47.108     {
00:11:47.108       "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:47.108       "dma_device_type": 2
00:11:47.108     }
00:11:47.108   ],
00:11:47.108   "driver_specific": {
00:11:47.108     "raid": {
00:11:47.108       "uuid": "ee1bb014-174a-4c81-af18-a6bee867512b",
00:11:47.108       "strip_size_kb": 0,
00:11:47.108       "state": "online",
00:11:47.108       "raid_level": "raid1",
00:11:47.108       "superblock": false,
00:11:47.108       "num_base_bdevs": 3,
00:11:47.108       "num_base_bdevs_discovered": 3,
00:11:47.108       "num_base_bdevs_operational": 3,
00:11:47.108       "base_bdevs_list": [
00:11:47.108         {
00:11:47.108           "name": "BaseBdev1",
00:11:47.108           "uuid": "b8d005fe-d487-436c-adc5-170615c47959",
00:11:47.108           "is_configured": true,
00:11:47.108           "data_offset": 0,
00:11:47.108           "data_size": 65536
00:11:47.108         },
00:11:47.108         {
00:11:47.108           "name": "BaseBdev2",
00:11:47.108           "uuid": "850c647a-72d9-4fa6-87fa-f72723a1d98e",
00:11:47.108           "is_configured": true,
00:11:47.108           "data_offset": 0,
00:11:47.108           "data_size": 65536
00:11:47.108         },
00:11:47.108         {
00:11:47.108           "name": "BaseBdev3",
00:11:47.108           "uuid": "a768e28e-aaef-4a5f-8f58-e0fe3501f9ba",
00:11:47.108           "is_configured": true,
00:11:47.108           "data_offset": 0,
00:11:47.108           "data_size": 65536
00:11:47.108         }
00:11:47.108       ]
00:11:47.108     }
00:11:47.108   }
00:11:47.108 }'
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:11:47.108 BaseBdev2
00:11:47.108 BaseBdev3'
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:47.108  08:45:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.108  08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.368 [2024-11-20 08:45:18.042247] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:47.368     "name": "Existed_Raid",
00:11:47.368     "uuid": "ee1bb014-174a-4c81-af18-a6bee867512b",
00:11:47.368     "strip_size_kb": 0,
00:11:47.368     "state": "online",
00:11:47.368     "raid_level": "raid1",
00:11:47.368     "superblock": false,
00:11:47.368     "num_base_bdevs": 3,
00:11:47.368     "num_base_bdevs_discovered": 2,
00:11:47.368     "num_base_bdevs_operational": 2,
00:11:47.368     "base_bdevs_list": [
00:11:47.368       {
00:11:47.368         "name": null,
00:11:47.368         "uuid": "00000000-0000-0000-0000-000000000000",
00:11:47.368         "is_configured": false,
00:11:47.368         "data_offset": 0,
00:11:47.368         "data_size": 65536
00:11:47.368       },
00:11:47.368       {
00:11:47.368         "name": "BaseBdev2",
00:11:47.368         "uuid": "850c647a-72d9-4fa6-87fa-f72723a1d98e",
00:11:47.368         "is_configured": true,
00:11:47.368         "data_offset": 0,
00:11:47.368         "data_size": 65536
00:11:47.368       },
00:11:47.368       {
00:11:47.368         "name": "BaseBdev3",
00:11:47.368         "uuid": "a768e28e-aaef-4a5f-8f58-e0fe3501f9ba",
00:11:47.368         "is_configured": true,
00:11:47.368         "data_offset": 0,
00:11:47.368         "data_size": 65536
00:11:47.368       }
00:11:47.368     ]
00:11:47.368   }'
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:47.368  08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.947  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:11:47.947  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:47.947  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:47.947  08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.947  08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.947  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:47.947  08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.947  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:47.947  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:47.947  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:11:47.947  08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.947  08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.947 [2024-11-20 08:45:18.778245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:11:48.205  08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:48.205  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:48.205  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:48.205  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:48.205  08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:48.205  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:48.205  08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:48.205  08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:48.205  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:48.205  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:48.205  08:45:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:11:48.205  08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:48.205  08:45:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:48.205 [2024-11-20 08:45:18.926100] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:11:48.205 [2024-11-20 08:45:18.926247] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:48.205 [2024-11-20 08:45:19.013873] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:48.205 [2024-11-20 08:45:19.013949] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:48.205 [2024-11-20 08:45:19.013970] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:48.205 BaseBdev2
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:48.205  08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:48.463  08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:48.464  08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:48.464  08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:48.464  08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:48.464 [
00:11:48.464   {
00:11:48.464     "name": "BaseBdev2",
00:11:48.464     "aliases": [
00:11:48.464       "c3a175fb-4f42-4289-aab6-a54c2273ecbc"
00:11:48.464     ],
00:11:48.464     "product_name": "Malloc disk",
00:11:48.464     "block_size": 512,
00:11:48.464     "num_blocks": 65536,
00:11:48.464     "uuid": "c3a175fb-4f42-4289-aab6-a54c2273ecbc",
00:11:48.464     "assigned_rate_limits": {
00:11:48.464       "rw_ios_per_sec": 0,
00:11:48.464       "rw_mbytes_per_sec": 0,
00:11:48.464       "r_mbytes_per_sec": 0,
00:11:48.464       "w_mbytes_per_sec": 0
00:11:48.464     },
00:11:48.464     "claimed": false,
00:11:48.464     "zoned": false,
00:11:48.464     "supported_io_types": {
00:11:48.464       "read": true,
00:11:48.464       "write": true,
00:11:48.464       "unmap": true,
00:11:48.464       "flush": true,
00:11:48.464       "reset": true,
00:11:48.464       "nvme_admin": false,
00:11:48.464       "nvme_io": false,
00:11:48.464       "nvme_io_md": false,
00:11:48.464       "write_zeroes": true,
00:11:48.464 "zcopy": true, 00:11:48.464 "get_zone_info": false, 00:11:48.464 "zone_management": false, 00:11:48.464 "zone_append": false, 00:11:48.464 "compare": false, 00:11:48.464 "compare_and_write": false, 00:11:48.464 "abort": true, 00:11:48.464 "seek_hole": false, 00:11:48.464 "seek_data": false, 00:11:48.464 "copy": true, 00:11:48.464 "nvme_iov_md": false 00:11:48.464 }, 00:11:48.464 "memory_domains": [ 00:11:48.464 { 00:11:48.464 "dma_device_id": "system", 00:11:48.464 "dma_device_type": 1 00:11:48.464 }, 00:11:48.464 { 00:11:48.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.464 "dma_device_type": 2 00:11:48.464 } 00:11:48.464 ], 00:11:48.464 "driver_specific": {} 00:11:48.464 } 00:11:48.464 ] 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.464 BaseBdev3 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:48.464 08:45:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.464 [ 00:11:48.464 { 00:11:48.464 "name": "BaseBdev3", 00:11:48.464 "aliases": [ 00:11:48.464 "d79bb0ff-2843-4ae8-a40b-d2b5e1f16693" 00:11:48.464 ], 00:11:48.464 "product_name": "Malloc disk", 00:11:48.464 "block_size": 512, 00:11:48.464 "num_blocks": 65536, 00:11:48.464 "uuid": "d79bb0ff-2843-4ae8-a40b-d2b5e1f16693", 00:11:48.464 "assigned_rate_limits": { 00:11:48.464 "rw_ios_per_sec": 0, 00:11:48.464 "rw_mbytes_per_sec": 0, 00:11:48.464 "r_mbytes_per_sec": 0, 00:11:48.464 "w_mbytes_per_sec": 0 00:11:48.464 }, 00:11:48.464 "claimed": false, 00:11:48.464 "zoned": false, 00:11:48.464 "supported_io_types": { 00:11:48.464 "read": true, 00:11:48.464 "write": true, 00:11:48.464 "unmap": true, 00:11:48.464 "flush": true, 00:11:48.464 "reset": true, 00:11:48.464 "nvme_admin": false, 00:11:48.464 "nvme_io": false, 00:11:48.464 "nvme_io_md": false, 00:11:48.464 "write_zeroes": true, 
00:11:48.464 "zcopy": true, 00:11:48.464 "get_zone_info": false, 00:11:48.464 "zone_management": false, 00:11:48.464 "zone_append": false, 00:11:48.464 "compare": false, 00:11:48.464 "compare_and_write": false, 00:11:48.464 "abort": true, 00:11:48.464 "seek_hole": false, 00:11:48.464 "seek_data": false, 00:11:48.464 "copy": true, 00:11:48.464 "nvme_iov_md": false 00:11:48.464 }, 00:11:48.464 "memory_domains": [ 00:11:48.464 { 00:11:48.464 "dma_device_id": "system", 00:11:48.464 "dma_device_type": 1 00:11:48.464 }, 00:11:48.464 { 00:11:48.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.464 "dma_device_type": 2 00:11:48.464 } 00:11:48.464 ], 00:11:48.464 "driver_specific": {} 00:11:48.464 } 00:11:48.464 ] 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.464 [2024-11-20 08:45:19.226135] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:48.464 [2024-11-20 08:45:19.226377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:48.464 [2024-11-20 08:45:19.226520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:48.464 [2024-11-20 08:45:19.229117] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.464 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:48.464 "name": "Existed_Raid", 00:11:48.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.464 "strip_size_kb": 0, 00:11:48.464 "state": "configuring", 00:11:48.464 "raid_level": "raid1", 00:11:48.464 "superblock": false, 00:11:48.464 "num_base_bdevs": 3, 00:11:48.464 "num_base_bdevs_discovered": 2, 00:11:48.464 "num_base_bdevs_operational": 3, 00:11:48.464 "base_bdevs_list": [ 00:11:48.464 { 00:11:48.464 "name": "BaseBdev1", 00:11:48.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.464 "is_configured": false, 00:11:48.464 "data_offset": 0, 00:11:48.464 "data_size": 0 00:11:48.464 }, 00:11:48.464 { 00:11:48.464 "name": "BaseBdev2", 00:11:48.464 "uuid": "c3a175fb-4f42-4289-aab6-a54c2273ecbc", 00:11:48.464 "is_configured": true, 00:11:48.464 "data_offset": 0, 00:11:48.464 "data_size": 65536 00:11:48.464 }, 00:11:48.464 { 00:11:48.464 "name": "BaseBdev3", 00:11:48.464 "uuid": "d79bb0ff-2843-4ae8-a40b-d2b5e1f16693", 00:11:48.464 "is_configured": true, 00:11:48.464 "data_offset": 0, 00:11:48.464 "data_size": 65536 00:11:48.464 } 00:11:48.464 ] 00:11:48.464 }' 00:11:48.465 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.465 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.032 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:49.032 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.032 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.032 [2024-11-20 08:45:19.770293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:49.032 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.032 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:11:49.032 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.033 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.033 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.033 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.033 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:49.033 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.033 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.033 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.033 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.033 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.033 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.033 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.033 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.033 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.033 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.033 "name": "Existed_Raid", 00:11:49.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.033 "strip_size_kb": 0, 00:11:49.033 "state": "configuring", 00:11:49.033 "raid_level": "raid1", 00:11:49.033 "superblock": false, 00:11:49.033 "num_base_bdevs": 3, 
00:11:49.033 "num_base_bdevs_discovered": 1, 00:11:49.033 "num_base_bdevs_operational": 3, 00:11:49.033 "base_bdevs_list": [ 00:11:49.033 { 00:11:49.033 "name": "BaseBdev1", 00:11:49.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.033 "is_configured": false, 00:11:49.033 "data_offset": 0, 00:11:49.033 "data_size": 0 00:11:49.033 }, 00:11:49.033 { 00:11:49.033 "name": null, 00:11:49.033 "uuid": "c3a175fb-4f42-4289-aab6-a54c2273ecbc", 00:11:49.033 "is_configured": false, 00:11:49.033 "data_offset": 0, 00:11:49.033 "data_size": 65536 00:11:49.033 }, 00:11:49.033 { 00:11:49.033 "name": "BaseBdev3", 00:11:49.033 "uuid": "d79bb0ff-2843-4ae8-a40b-d2b5e1f16693", 00:11:49.033 "is_configured": true, 00:11:49.033 "data_offset": 0, 00:11:49.033 "data_size": 65536 00:11:49.033 } 00:11:49.033 ] 00:11:49.033 }' 00:11:49.033 08:45:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.033 08:45:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.599 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.599 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:49.599 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.599 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.599 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.599 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:49.599 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:49.599 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.599 08:45:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.599 [2024-11-20 08:45:20.352451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:49.599 BaseBdev1 00:11:49.599 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.599 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:49.599 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:49.599 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:49.599 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:49.599 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:49.599 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:49.599 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.600 [ 00:11:49.600 { 00:11:49.600 "name": "BaseBdev1", 00:11:49.600 "aliases": [ 00:11:49.600 "0c8ba729-2aed-4161-9e03-0e7bc2e5912d" 00:11:49.600 ], 00:11:49.600 "product_name": "Malloc disk", 
00:11:49.600 "block_size": 512, 00:11:49.600 "num_blocks": 65536, 00:11:49.600 "uuid": "0c8ba729-2aed-4161-9e03-0e7bc2e5912d", 00:11:49.600 "assigned_rate_limits": { 00:11:49.600 "rw_ios_per_sec": 0, 00:11:49.600 "rw_mbytes_per_sec": 0, 00:11:49.600 "r_mbytes_per_sec": 0, 00:11:49.600 "w_mbytes_per_sec": 0 00:11:49.600 }, 00:11:49.600 "claimed": true, 00:11:49.600 "claim_type": "exclusive_write", 00:11:49.600 "zoned": false, 00:11:49.600 "supported_io_types": { 00:11:49.600 "read": true, 00:11:49.600 "write": true, 00:11:49.600 "unmap": true, 00:11:49.600 "flush": true, 00:11:49.600 "reset": true, 00:11:49.600 "nvme_admin": false, 00:11:49.600 "nvme_io": false, 00:11:49.600 "nvme_io_md": false, 00:11:49.600 "write_zeroes": true, 00:11:49.600 "zcopy": true, 00:11:49.600 "get_zone_info": false, 00:11:49.600 "zone_management": false, 00:11:49.600 "zone_append": false, 00:11:49.600 "compare": false, 00:11:49.600 "compare_and_write": false, 00:11:49.600 "abort": true, 00:11:49.600 "seek_hole": false, 00:11:49.600 "seek_data": false, 00:11:49.600 "copy": true, 00:11:49.600 "nvme_iov_md": false 00:11:49.600 }, 00:11:49.600 "memory_domains": [ 00:11:49.600 { 00:11:49.600 "dma_device_id": "system", 00:11:49.600 "dma_device_type": 1 00:11:49.600 }, 00:11:49.600 { 00:11:49.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.600 "dma_device_type": 2 00:11:49.600 } 00:11:49.600 ], 00:11:49.600 "driver_specific": {} 00:11:49.600 } 00:11:49.600 ] 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.600 "name": "Existed_Raid", 00:11:49.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.600 "strip_size_kb": 0, 00:11:49.600 "state": "configuring", 00:11:49.600 "raid_level": "raid1", 00:11:49.600 "superblock": false, 00:11:49.600 "num_base_bdevs": 3, 00:11:49.600 "num_base_bdevs_discovered": 2, 00:11:49.600 "num_base_bdevs_operational": 3, 00:11:49.600 "base_bdevs_list": [ 00:11:49.600 { 00:11:49.600 "name": "BaseBdev1", 00:11:49.600 "uuid": 
"0c8ba729-2aed-4161-9e03-0e7bc2e5912d", 00:11:49.600 "is_configured": true, 00:11:49.600 "data_offset": 0, 00:11:49.600 "data_size": 65536 00:11:49.600 }, 00:11:49.600 { 00:11:49.600 "name": null, 00:11:49.600 "uuid": "c3a175fb-4f42-4289-aab6-a54c2273ecbc", 00:11:49.600 "is_configured": false, 00:11:49.600 "data_offset": 0, 00:11:49.600 "data_size": 65536 00:11:49.600 }, 00:11:49.600 { 00:11:49.600 "name": "BaseBdev3", 00:11:49.600 "uuid": "d79bb0ff-2843-4ae8-a40b-d2b5e1f16693", 00:11:49.600 "is_configured": true, 00:11:49.600 "data_offset": 0, 00:11:49.600 "data_size": 65536 00:11:49.600 } 00:11:49.600 ] 00:11:49.600 }' 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.600 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.167 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.167 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.167 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.167 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:50.167 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.167 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:50.167 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:50.167 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.167 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.167 [2024-11-20 08:45:20.944664] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:50.167 08:45:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.167 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:50.167 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.167 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.167 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.167 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.167 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:50.167 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.167 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.167 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.167 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.167 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.167 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.167 08:45:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.167 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.167 08:45:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.167 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.167 "name": "Existed_Raid", 00:11:50.167 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:50.167 "strip_size_kb": 0, 00:11:50.167 "state": "configuring", 00:11:50.167 "raid_level": "raid1", 00:11:50.167 "superblock": false, 00:11:50.167 "num_base_bdevs": 3, 00:11:50.167 "num_base_bdevs_discovered": 1, 00:11:50.167 "num_base_bdevs_operational": 3, 00:11:50.167 "base_bdevs_list": [ 00:11:50.167 { 00:11:50.167 "name": "BaseBdev1", 00:11:50.167 "uuid": "0c8ba729-2aed-4161-9e03-0e7bc2e5912d", 00:11:50.167 "is_configured": true, 00:11:50.167 "data_offset": 0, 00:11:50.167 "data_size": 65536 00:11:50.167 }, 00:11:50.167 { 00:11:50.167 "name": null, 00:11:50.167 "uuid": "c3a175fb-4f42-4289-aab6-a54c2273ecbc", 00:11:50.167 "is_configured": false, 00:11:50.167 "data_offset": 0, 00:11:50.167 "data_size": 65536 00:11:50.167 }, 00:11:50.167 { 00:11:50.167 "name": null, 00:11:50.167 "uuid": "d79bb0ff-2843-4ae8-a40b-d2b5e1f16693", 00:11:50.167 "is_configured": false, 00:11:50.167 "data_offset": 0, 00:11:50.167 "data_size": 65536 00:11:50.167 } 00:11:50.167 ] 00:11:50.167 }' 00:11:50.167 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.167 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.736 [2024-11-20 08:45:21.500877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.736 "name": "Existed_Raid", 00:11:50.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.736 "strip_size_kb": 0, 00:11:50.736 "state": "configuring", 00:11:50.736 "raid_level": "raid1", 00:11:50.736 "superblock": false, 00:11:50.736 "num_base_bdevs": 3, 00:11:50.736 "num_base_bdevs_discovered": 2, 00:11:50.736 "num_base_bdevs_operational": 3, 00:11:50.736 "base_bdevs_list": [ 00:11:50.736 { 00:11:50.736 "name": "BaseBdev1", 00:11:50.736 "uuid": "0c8ba729-2aed-4161-9e03-0e7bc2e5912d", 00:11:50.736 "is_configured": true, 00:11:50.736 "data_offset": 0, 00:11:50.736 "data_size": 65536 00:11:50.736 }, 00:11:50.736 { 00:11:50.736 "name": null, 00:11:50.736 "uuid": "c3a175fb-4f42-4289-aab6-a54c2273ecbc", 00:11:50.736 "is_configured": false, 00:11:50.736 "data_offset": 0, 00:11:50.736 "data_size": 65536 00:11:50.736 }, 00:11:50.736 { 00:11:50.736 "name": "BaseBdev3", 00:11:50.736 "uuid": "d79bb0ff-2843-4ae8-a40b-d2b5e1f16693", 00:11:50.736 "is_configured": true, 00:11:50.736 "data_offset": 0, 00:11:50.736 "data_size": 65536 00:11:50.736 } 00:11:50.736 ] 00:11:50.736 }' 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.736 08:45:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.303 [2024-11-20 08:45:22.101048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.303 08:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.563 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.563 "name": "Existed_Raid", 00:11:51.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.563 "strip_size_kb": 0, 00:11:51.563 "state": "configuring", 00:11:51.563 "raid_level": "raid1", 00:11:51.563 "superblock": false, 00:11:51.563 "num_base_bdevs": 3, 00:11:51.563 "num_base_bdevs_discovered": 1, 00:11:51.563 "num_base_bdevs_operational": 3, 00:11:51.563 "base_bdevs_list": [ 00:11:51.563 { 00:11:51.563 "name": null, 00:11:51.563 "uuid": "0c8ba729-2aed-4161-9e03-0e7bc2e5912d", 00:11:51.563 "is_configured": false, 00:11:51.563 "data_offset": 0, 00:11:51.563 "data_size": 65536 00:11:51.563 }, 00:11:51.563 { 00:11:51.563 "name": null, 00:11:51.563 "uuid": "c3a175fb-4f42-4289-aab6-a54c2273ecbc", 00:11:51.563 "is_configured": false, 00:11:51.563 "data_offset": 0, 00:11:51.563 "data_size": 65536 00:11:51.563 }, 00:11:51.563 { 00:11:51.563 "name": "BaseBdev3", 00:11:51.563 "uuid": "d79bb0ff-2843-4ae8-a40b-d2b5e1f16693", 00:11:51.563 "is_configured": true, 00:11:51.563 "data_offset": 0, 00:11:51.563 "data_size": 65536 00:11:51.563 } 00:11:51.563 ] 00:11:51.563 }' 00:11:51.563 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.563 08:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.884 [2024-11-20 08:45:22.769260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.884 08:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.143 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.143 "name": "Existed_Raid", 00:11:52.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.143 "strip_size_kb": 0, 00:11:52.143 "state": "configuring", 00:11:52.143 "raid_level": "raid1", 00:11:52.143 "superblock": false, 00:11:52.143 "num_base_bdevs": 3, 00:11:52.143 "num_base_bdevs_discovered": 2, 00:11:52.143 "num_base_bdevs_operational": 3, 00:11:52.143 "base_bdevs_list": [ 00:11:52.143 { 00:11:52.143 "name": null, 00:11:52.143 "uuid": "0c8ba729-2aed-4161-9e03-0e7bc2e5912d", 00:11:52.143 "is_configured": false, 00:11:52.143 "data_offset": 0, 00:11:52.143 "data_size": 65536 00:11:52.143 }, 00:11:52.143 { 00:11:52.143 "name": "BaseBdev2", 00:11:52.143 "uuid": "c3a175fb-4f42-4289-aab6-a54c2273ecbc", 00:11:52.143 "is_configured": true, 00:11:52.143 "data_offset": 0, 00:11:52.143 "data_size": 65536 00:11:52.143 }, 00:11:52.143 { 00:11:52.143 "name": "BaseBdev3", 
00:11:52.143 "uuid": "d79bb0ff-2843-4ae8-a40b-d2b5e1f16693", 00:11:52.143 "is_configured": true, 00:11:52.143 "data_offset": 0, 00:11:52.143 "data_size": 65536 00:11:52.143 } 00:11:52.143 ] 00:11:52.143 }' 00:11:52.143 08:45:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.143 08:45:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.402 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:52.402 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.402 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.402 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.402 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.402 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:52.402 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.402 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.402 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.402 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:52.402 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0c8ba729-2aed-4161-9e03-0e7bc2e5912d 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:52.662 [2024-11-20 08:45:23.384570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:52.662 [2024-11-20 08:45:23.384647] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:52.662 [2024-11-20 08:45:23.384660] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:52.662 [2024-11-20 08:45:23.384994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:52.662 [2024-11-20 08:45:23.385258] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:52.662 [2024-11-20 08:45:23.385283] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:52.662 [2024-11-20 08:45:23.385603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.662 NewBaseBdev 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.662 
08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.662 [ 00:11:52.662 { 00:11:52.662 "name": "NewBaseBdev", 00:11:52.662 "aliases": [ 00:11:52.662 "0c8ba729-2aed-4161-9e03-0e7bc2e5912d" 00:11:52.662 ], 00:11:52.662 "product_name": "Malloc disk", 00:11:52.662 "block_size": 512, 00:11:52.662 "num_blocks": 65536, 00:11:52.662 "uuid": "0c8ba729-2aed-4161-9e03-0e7bc2e5912d", 00:11:52.662 "assigned_rate_limits": { 00:11:52.662 "rw_ios_per_sec": 0, 00:11:52.662 "rw_mbytes_per_sec": 0, 00:11:52.662 "r_mbytes_per_sec": 0, 00:11:52.662 "w_mbytes_per_sec": 0 00:11:52.662 }, 00:11:52.662 "claimed": true, 00:11:52.662 "claim_type": "exclusive_write", 00:11:52.662 "zoned": false, 00:11:52.662 "supported_io_types": { 00:11:52.662 "read": true, 00:11:52.662 "write": true, 00:11:52.662 "unmap": true, 00:11:52.662 "flush": true, 00:11:52.662 "reset": true, 00:11:52.662 "nvme_admin": false, 00:11:52.662 "nvme_io": false, 00:11:52.662 "nvme_io_md": false, 00:11:52.662 "write_zeroes": true, 00:11:52.662 "zcopy": true, 00:11:52.662 "get_zone_info": false, 00:11:52.662 "zone_management": false, 00:11:52.662 "zone_append": false, 00:11:52.662 "compare": false, 00:11:52.662 "compare_and_write": false, 00:11:52.662 "abort": true, 00:11:52.662 "seek_hole": false, 00:11:52.662 "seek_data": false, 00:11:52.662 "copy": true, 00:11:52.662 "nvme_iov_md": false 00:11:52.662 }, 00:11:52.662 "memory_domains": [ 00:11:52.662 { 00:11:52.662 "dma_device_id": "system", 00:11:52.662 "dma_device_type": 1 
00:11:52.662 }, 00:11:52.662 { 00:11:52.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.662 "dma_device_type": 2 00:11:52.662 } 00:11:52.662 ], 00:11:52.662 "driver_specific": {} 00:11:52.662 } 00:11:52.662 ] 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.662 "name": "Existed_Raid", 00:11:52.662 "uuid": "cf89c2c8-c923-4b88-a0b6-8a4c206328e6", 00:11:52.662 "strip_size_kb": 0, 00:11:52.662 "state": "online", 00:11:52.662 "raid_level": "raid1", 00:11:52.662 "superblock": false, 00:11:52.662 "num_base_bdevs": 3, 00:11:52.662 "num_base_bdevs_discovered": 3, 00:11:52.662 "num_base_bdevs_operational": 3, 00:11:52.662 "base_bdevs_list": [ 00:11:52.662 { 00:11:52.662 "name": "NewBaseBdev", 00:11:52.662 "uuid": "0c8ba729-2aed-4161-9e03-0e7bc2e5912d", 00:11:52.662 "is_configured": true, 00:11:52.662 "data_offset": 0, 00:11:52.662 "data_size": 65536 00:11:52.662 }, 00:11:52.662 { 00:11:52.662 "name": "BaseBdev2", 00:11:52.662 "uuid": "c3a175fb-4f42-4289-aab6-a54c2273ecbc", 00:11:52.662 "is_configured": true, 00:11:52.662 "data_offset": 0, 00:11:52.662 "data_size": 65536 00:11:52.662 }, 00:11:52.662 { 00:11:52.662 "name": "BaseBdev3", 00:11:52.662 "uuid": "d79bb0ff-2843-4ae8-a40b-d2b5e1f16693", 00:11:52.662 "is_configured": true, 00:11:52.662 "data_offset": 0, 00:11:52.662 "data_size": 65536 00:11:52.662 } 00:11:52.662 ] 00:11:52.662 }' 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.662 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.232 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:53.232 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:53.232 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:53.232 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:11:53.232 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:53.232 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:53.232 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:53.232 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:53.232 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.232 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.232 [2024-11-20 08:45:23.929284] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:53.232 08:45:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.232 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:53.232 "name": "Existed_Raid", 00:11:53.232 "aliases": [ 00:11:53.232 "cf89c2c8-c923-4b88-a0b6-8a4c206328e6" 00:11:53.232 ], 00:11:53.232 "product_name": "Raid Volume", 00:11:53.232 "block_size": 512, 00:11:53.232 "num_blocks": 65536, 00:11:53.232 "uuid": "cf89c2c8-c923-4b88-a0b6-8a4c206328e6", 00:11:53.232 "assigned_rate_limits": { 00:11:53.232 "rw_ios_per_sec": 0, 00:11:53.232 "rw_mbytes_per_sec": 0, 00:11:53.232 "r_mbytes_per_sec": 0, 00:11:53.232 "w_mbytes_per_sec": 0 00:11:53.232 }, 00:11:53.232 "claimed": false, 00:11:53.232 "zoned": false, 00:11:53.232 "supported_io_types": { 00:11:53.232 "read": true, 00:11:53.232 "write": true, 00:11:53.232 "unmap": false, 00:11:53.232 "flush": false, 00:11:53.232 "reset": true, 00:11:53.232 "nvme_admin": false, 00:11:53.232 "nvme_io": false, 00:11:53.232 "nvme_io_md": false, 00:11:53.232 "write_zeroes": true, 00:11:53.232 "zcopy": false, 00:11:53.232 "get_zone_info": false, 00:11:53.232 "zone_management": false, 00:11:53.232 
"zone_append": false, 00:11:53.232 "compare": false, 00:11:53.232 "compare_and_write": false, 00:11:53.232 "abort": false, 00:11:53.232 "seek_hole": false, 00:11:53.232 "seek_data": false, 00:11:53.232 "copy": false, 00:11:53.232 "nvme_iov_md": false 00:11:53.232 }, 00:11:53.232 "memory_domains": [ 00:11:53.232 { 00:11:53.232 "dma_device_id": "system", 00:11:53.232 "dma_device_type": 1 00:11:53.232 }, 00:11:53.232 { 00:11:53.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.232 "dma_device_type": 2 00:11:53.232 }, 00:11:53.232 { 00:11:53.232 "dma_device_id": "system", 00:11:53.232 "dma_device_type": 1 00:11:53.232 }, 00:11:53.232 { 00:11:53.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.232 "dma_device_type": 2 00:11:53.232 }, 00:11:53.232 { 00:11:53.232 "dma_device_id": "system", 00:11:53.232 "dma_device_type": 1 00:11:53.232 }, 00:11:53.232 { 00:11:53.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.232 "dma_device_type": 2 00:11:53.232 } 00:11:53.232 ], 00:11:53.232 "driver_specific": { 00:11:53.232 "raid": { 00:11:53.232 "uuid": "cf89c2c8-c923-4b88-a0b6-8a4c206328e6", 00:11:53.232 "strip_size_kb": 0, 00:11:53.232 "state": "online", 00:11:53.232 "raid_level": "raid1", 00:11:53.232 "superblock": false, 00:11:53.232 "num_base_bdevs": 3, 00:11:53.232 "num_base_bdevs_discovered": 3, 00:11:53.232 "num_base_bdevs_operational": 3, 00:11:53.232 "base_bdevs_list": [ 00:11:53.232 { 00:11:53.232 "name": "NewBaseBdev", 00:11:53.232 "uuid": "0c8ba729-2aed-4161-9e03-0e7bc2e5912d", 00:11:53.232 "is_configured": true, 00:11:53.232 "data_offset": 0, 00:11:53.232 "data_size": 65536 00:11:53.232 }, 00:11:53.232 { 00:11:53.232 "name": "BaseBdev2", 00:11:53.232 "uuid": "c3a175fb-4f42-4289-aab6-a54c2273ecbc", 00:11:53.232 "is_configured": true, 00:11:53.232 "data_offset": 0, 00:11:53.232 "data_size": 65536 00:11:53.232 }, 00:11:53.233 { 00:11:53.233 "name": "BaseBdev3", 00:11:53.233 "uuid": "d79bb0ff-2843-4ae8-a40b-d2b5e1f16693", 00:11:53.233 "is_configured": true, 
00:11:53.233 "data_offset": 0, 00:11:53.233 "data_size": 65536 00:11:53.233 } 00:11:53.233 ] 00:11:53.233 } 00:11:53.233 } 00:11:53.233 }' 00:11:53.233 08:45:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:53.233 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:53.233 BaseBdev2 00:11:53.233 BaseBdev3' 00:11:53.233 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.233 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:53.233 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.233 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:53.233 08:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.233 08:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.233 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.233 08:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.233 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.233 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.233 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.233 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:53.233 08:45:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.233 08:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.233 08:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.492 [2024-11-20 08:45:24.281010] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:11:53.492 [2024-11-20 08:45:24.281050] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:53.492 [2024-11-20 08:45:24.281143] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.492 [2024-11-20 08:45:24.281719] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:53.492 [2024-11-20 08:45:24.281859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67431 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67431 ']' 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67431 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67431 00:11:53.492 killing process with pid 67431 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67431' 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67431 00:11:53.492 [2024-11-20 08:45:24.318957] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:11:53.492 08:45:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67431 00:11:53.752 [2024-11-20 08:45:24.594794] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:55.130 08:45:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:55.130 00:11:55.130 real 0m11.929s 00:11:55.131 user 0m19.779s 00:11:55.131 sys 0m1.697s 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.131 ************************************ 00:11:55.131 END TEST raid_state_function_test 00:11:55.131 ************************************ 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.131 08:45:25 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:11:55.131 08:45:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:55.131 08:45:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.131 08:45:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:55.131 ************************************ 00:11:55.131 START TEST raid_state_function_test_sb 00:11:55.131 ************************************ 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:55.131 Process raid pid: 68069 00:11:55.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68069 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68069' 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68069 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68069 ']' 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.131 08:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.131 [2024-11-20 08:45:25.771253] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:11:55.131 [2024-11-20 08:45:25.771684] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.131 [2024-11-20 08:45:25.947395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.390 [2024-11-20 08:45:26.080864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.390 [2024-11-20 08:45:26.291113] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.390 [2024-11-20 08:45:26.291171] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.008 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.008 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:56.008 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:56.008 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.008 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.008 [2024-11-20 08:45:26.785116] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:56.008 [2024-11-20 08:45:26.785253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:56.008 [2024-11-20 08:45:26.785272] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:56.008 [2024-11-20 08:45:26.785289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:56.008 [2024-11-20 08:45:26.785299] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:56.008 [2024-11-20 08:45:26.785314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:56.008 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.008 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:56.008 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.008 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.008 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.008 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.008 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:56.008 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.008 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.008 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.008 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.008 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.008 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.008 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.008 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.008 08:45:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.009 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.009 "name": "Existed_Raid", 00:11:56.009 "uuid": "c344169a-4977-4e50-b419-51786e6200a8", 00:11:56.009 "strip_size_kb": 0, 00:11:56.009 "state": "configuring", 00:11:56.009 "raid_level": "raid1", 00:11:56.009 "superblock": true, 00:11:56.009 "num_base_bdevs": 3, 00:11:56.009 "num_base_bdevs_discovered": 0, 00:11:56.009 "num_base_bdevs_operational": 3, 00:11:56.009 "base_bdevs_list": [ 00:11:56.009 { 00:11:56.009 "name": "BaseBdev1", 00:11:56.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.009 "is_configured": false, 00:11:56.009 "data_offset": 0, 00:11:56.009 "data_size": 0 00:11:56.009 }, 00:11:56.009 { 00:11:56.009 "name": "BaseBdev2", 00:11:56.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.009 "is_configured": false, 00:11:56.009 "data_offset": 0, 00:11:56.009 "data_size": 0 00:11:56.009 }, 00:11:56.009 { 00:11:56.009 "name": "BaseBdev3", 00:11:56.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.009 "is_configured": false, 00:11:56.009 "data_offset": 0, 00:11:56.009 "data_size": 0 00:11:56.009 } 00:11:56.009 ] 00:11:56.009 }' 00:11:56.009 08:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.009 08:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.575 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:56.575 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.575 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.575 [2024-11-20 08:45:27.317306] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:56.575 [2024-11-20 08:45:27.317734] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:56.575 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.575 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:56.575 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.575 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.575 [2024-11-20 08:45:27.325210] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:56.575 [2024-11-20 08:45:27.325280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:56.575 [2024-11-20 08:45:27.325296] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:56.575 [2024-11-20 08:45:27.325313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:56.575 [2024-11-20 08:45:27.325322] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:56.575 [2024-11-20 08:45:27.325337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:56.575 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.575 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:56.575 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.576 [2024-11-20 08:45:27.374105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:56.576 BaseBdev1 
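The `waitforbdev BaseBdev1` trace that follows (autotest_common.sh@903-911) amounts to polling `bdev_get_bdevs -b <name>` until the bdev registers or the default 2000 ms timeout expires. A minimal, self-contained sketch of that loop, with `rpc_cmd` replaced by a stub that "registers" the bdev after a few polls (the real helper shells out to SPDK's rpc.py against a live target, so this is an illustration, not SPDK's implementation):

```shell
# Stub for SPDK's rpc_cmd wrapper: fails the first two polls, then reports
# the bdev as present. Runs in the same shell, so the counter persists.
attempts=0
rpc_cmd() {
	attempts=$((attempts + 1))
	[ "$attempts" -ge 3 ] && echo '[{"name": "BaseBdev1"}]'
}

# Sketch of the waitforbdev polling loop: 100 ms steps until the bdev
# shows up or bdev_timeout (milliseconds, default 2000) elapses.
waitforbdev() {
	local bdev_name=$1 bdev_timeout=${2:-2000} i
	i=0
	while [ "$i" -lt "$bdev_timeout" ]; do
		rpc_cmd bdev_get_bdevs -b "$bdev_name" >/dev/null 2>&1 && return 0
		sleep 0.1
		i=$((i + 100))
	done
	return 1
}

waitforbdev BaseBdev1 2000
ready=$?
```

The `-t 2000` passed to `bdev_get_bdevs` in the trace serves the same purpose server-side; the client-side loop above is the fallback shape such helpers take.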
00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.576 [ 00:11:56.576 { 00:11:56.576 "name": "BaseBdev1", 00:11:56.576 "aliases": [ 00:11:56.576 "c46ed728-0308-48bd-8b56-7fabb2ae3bed" 00:11:56.576 ], 00:11:56.576 "product_name": "Malloc disk", 00:11:56.576 "block_size": 512, 00:11:56.576 "num_blocks": 65536, 00:11:56.576 "uuid": "c46ed728-0308-48bd-8b56-7fabb2ae3bed", 00:11:56.576 "assigned_rate_limits": { 00:11:56.576 
"rw_ios_per_sec": 0, 00:11:56.576 "rw_mbytes_per_sec": 0, 00:11:56.576 "r_mbytes_per_sec": 0, 00:11:56.576 "w_mbytes_per_sec": 0 00:11:56.576 }, 00:11:56.576 "claimed": true, 00:11:56.576 "claim_type": "exclusive_write", 00:11:56.576 "zoned": false, 00:11:56.576 "supported_io_types": { 00:11:56.576 "read": true, 00:11:56.576 "write": true, 00:11:56.576 "unmap": true, 00:11:56.576 "flush": true, 00:11:56.576 "reset": true, 00:11:56.576 "nvme_admin": false, 00:11:56.576 "nvme_io": false, 00:11:56.576 "nvme_io_md": false, 00:11:56.576 "write_zeroes": true, 00:11:56.576 "zcopy": true, 00:11:56.576 "get_zone_info": false, 00:11:56.576 "zone_management": false, 00:11:56.576 "zone_append": false, 00:11:56.576 "compare": false, 00:11:56.576 "compare_and_write": false, 00:11:56.576 "abort": true, 00:11:56.576 "seek_hole": false, 00:11:56.576 "seek_data": false, 00:11:56.576 "copy": true, 00:11:56.576 "nvme_iov_md": false 00:11:56.576 }, 00:11:56.576 "memory_domains": [ 00:11:56.576 { 00:11:56.576 "dma_device_id": "system", 00:11:56.576 "dma_device_type": 1 00:11:56.576 }, 00:11:56.576 { 00:11:56.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.576 "dma_device_type": 2 00:11:56.576 } 00:11:56.576 ], 00:11:56.576 "driver_specific": {} 00:11:56.576 } 00:11:56.576 ] 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.576 "name": "Existed_Raid", 00:11:56.576 "uuid": "2e9a8123-bc17-410b-8f50-b103497f262f", 00:11:56.576 "strip_size_kb": 0, 00:11:56.576 "state": "configuring", 00:11:56.576 "raid_level": "raid1", 00:11:56.576 "superblock": true, 00:11:56.576 "num_base_bdevs": 3, 00:11:56.576 "num_base_bdevs_discovered": 1, 00:11:56.576 "num_base_bdevs_operational": 3, 00:11:56.576 "base_bdevs_list": [ 00:11:56.576 { 00:11:56.576 "name": "BaseBdev1", 00:11:56.576 "uuid": "c46ed728-0308-48bd-8b56-7fabb2ae3bed", 00:11:56.576 "is_configured": true, 00:11:56.576 "data_offset": 2048, 00:11:56.576 "data_size": 63488 
00:11:56.576 }, 00:11:56.576 { 00:11:56.576 "name": "BaseBdev2", 00:11:56.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.576 "is_configured": false, 00:11:56.576 "data_offset": 0, 00:11:56.576 "data_size": 0 00:11:56.576 }, 00:11:56.576 { 00:11:56.576 "name": "BaseBdev3", 00:11:56.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.576 "is_configured": false, 00:11:56.576 "data_offset": 0, 00:11:56.576 "data_size": 0 00:11:56.576 } 00:11:56.576 ] 00:11:56.576 }' 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.576 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.145 [2024-11-20 08:45:27.910306] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:57.145 [2024-11-20 08:45:27.910378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.145 [2024-11-20 08:45:27.922370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.145 [2024-11-20 08:45:27.924951] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:57.145 [2024-11-20 08:45:27.925012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:57.145 [2024-11-20 08:45:27.925031] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:57.145 [2024-11-20 08:45:27.925047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.145 "name": "Existed_Raid", 00:11:57.145 "uuid": "f7673f9b-b6fc-40a7-a047-4ab4a19c2f7b", 00:11:57.145 "strip_size_kb": 0, 00:11:57.145 "state": "configuring", 00:11:57.145 "raid_level": "raid1", 00:11:57.145 "superblock": true, 00:11:57.145 "num_base_bdevs": 3, 00:11:57.145 "num_base_bdevs_discovered": 1, 00:11:57.145 "num_base_bdevs_operational": 3, 00:11:57.145 "base_bdevs_list": [ 00:11:57.145 { 00:11:57.145 "name": "BaseBdev1", 00:11:57.145 "uuid": "c46ed728-0308-48bd-8b56-7fabb2ae3bed", 00:11:57.145 "is_configured": true, 00:11:57.145 "data_offset": 2048, 00:11:57.145 "data_size": 63488 00:11:57.145 }, 00:11:57.145 { 00:11:57.145 "name": "BaseBdev2", 00:11:57.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.145 "is_configured": false, 00:11:57.145 "data_offset": 0, 00:11:57.145 "data_size": 0 00:11:57.145 }, 00:11:57.145 { 00:11:57.145 "name": "BaseBdev3", 00:11:57.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.145 "is_configured": false, 00:11:57.145 "data_offset": 0, 00:11:57.145 "data_size": 0 00:11:57.145 } 00:11:57.145 ] 00:11:57.145 }' 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.145 08:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:57.713 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:57.713 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.713 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.713 [2024-11-20 08:45:28.498602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.713 BaseBdev2 00:11:57.713 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.713 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:57.713 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:57.713 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:57.713 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:57.713 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:57.713 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:57.713 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:57.713 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.713 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.713 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.713 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:57.713 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:57.713 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.713 [ 00:11:57.713 { 00:11:57.713 "name": "BaseBdev2", 00:11:57.713 "aliases": [ 00:11:57.713 "d33975d8-0408-4cc9-a1ee-35d33327165d" 00:11:57.713 ], 00:11:57.713 "product_name": "Malloc disk", 00:11:57.713 "block_size": 512, 00:11:57.713 "num_blocks": 65536, 00:11:57.713 "uuid": "d33975d8-0408-4cc9-a1ee-35d33327165d", 00:11:57.713 "assigned_rate_limits": { 00:11:57.713 "rw_ios_per_sec": 0, 00:11:57.713 "rw_mbytes_per_sec": 0, 00:11:57.713 "r_mbytes_per_sec": 0, 00:11:57.713 "w_mbytes_per_sec": 0 00:11:57.713 }, 00:11:57.713 "claimed": true, 00:11:57.713 "claim_type": "exclusive_write", 00:11:57.713 "zoned": false, 00:11:57.713 "supported_io_types": { 00:11:57.713 "read": true, 00:11:57.713 "write": true, 00:11:57.713 "unmap": true, 00:11:57.713 "flush": true, 00:11:57.713 "reset": true, 00:11:57.713 "nvme_admin": false, 00:11:57.713 "nvme_io": false, 00:11:57.713 "nvme_io_md": false, 00:11:57.713 "write_zeroes": true, 00:11:57.713 "zcopy": true, 00:11:57.713 "get_zone_info": false, 00:11:57.713 "zone_management": false, 00:11:57.713 "zone_append": false, 00:11:57.713 "compare": false, 00:11:57.713 "compare_and_write": false, 00:11:57.713 "abort": true, 00:11:57.713 "seek_hole": false, 00:11:57.713 "seek_data": false, 00:11:57.713 "copy": true, 00:11:57.713 "nvme_iov_md": false 00:11:57.713 }, 00:11:57.713 "memory_domains": [ 00:11:57.713 { 00:11:57.713 "dma_device_id": "system", 00:11:57.713 "dma_device_type": 1 00:11:57.713 }, 00:11:57.713 { 00:11:57.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.714 "dma_device_type": 2 00:11:57.714 } 00:11:57.714 ], 00:11:57.714 "driver_specific": {} 00:11:57.714 } 00:11:57.714 ] 00:11:57.714 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.714 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
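With BaseBdev2 registered, the test re-runs `verify_raid_bdev_state Existed_Raid configuring raid1 0 3` and the trace goes on to dump an entry with 2 of 3 base bdevs discovered. The real helper in bdev_raid.sh selects that entry from `bdev_raid_get_bdevs all` with jq; the sketch below applies the same field checks to a trimmed copy of the dumped entry, using sed only so the sketch has no jq dependency:

```shell
# Trimmed copy of the "Existed_Raid" entry from bdev_raid_get_bdevs above.
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 3
}'

# Pull a quoted string field / a bare numeric field out of the JSON dump.
str_field() { printf '%s\n' "$raid_bdev_info" | sed -n "s/.*\"$1\": \"\([^\"]*\)\".*/\1/p"; }
num_field() { printf '%s\n' "$raid_bdev_info" | sed -n "s/.*\"$1\": \([0-9]*\).*/\1/p"; }

state=$(str_field state)
raid_level=$(str_field raid_level)
discovered=$(num_field num_base_bdevs_discovered)
operational=$(num_field num_base_bdevs_operational)

# While assembling, state must be "configuring" and discovered < operational.
result=fail
[ "$state" = configuring ] && [ "$raid_level" = raid1 ] &&
	[ "$discovered" -lt "$operational" ] && result=ok
```

Once the last base bdev is claimed, the same check is repeated with expected_state=online and discovered == operational, which is exactly the transition visible later in the trace.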
00:11:57.714 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:57.714 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:57.714 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:57.714 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.714 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.714 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.714 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.714 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.714 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.714 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.714 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.714 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.714 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.714 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.714 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.714 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.714 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.714 
08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.714 "name": "Existed_Raid", 00:11:57.714 "uuid": "f7673f9b-b6fc-40a7-a047-4ab4a19c2f7b", 00:11:57.714 "strip_size_kb": 0, 00:11:57.714 "state": "configuring", 00:11:57.714 "raid_level": "raid1", 00:11:57.714 "superblock": true, 00:11:57.714 "num_base_bdevs": 3, 00:11:57.714 "num_base_bdevs_discovered": 2, 00:11:57.714 "num_base_bdevs_operational": 3, 00:11:57.714 "base_bdevs_list": [ 00:11:57.714 { 00:11:57.714 "name": "BaseBdev1", 00:11:57.714 "uuid": "c46ed728-0308-48bd-8b56-7fabb2ae3bed", 00:11:57.714 "is_configured": true, 00:11:57.714 "data_offset": 2048, 00:11:57.714 "data_size": 63488 00:11:57.714 }, 00:11:57.714 { 00:11:57.714 "name": "BaseBdev2", 00:11:57.714 "uuid": "d33975d8-0408-4cc9-a1ee-35d33327165d", 00:11:57.714 "is_configured": true, 00:11:57.714 "data_offset": 2048, 00:11:57.714 "data_size": 63488 00:11:57.714 }, 00:11:57.714 { 00:11:57.714 "name": "BaseBdev3", 00:11:57.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.714 "is_configured": false, 00:11:57.714 "data_offset": 0, 00:11:57.714 "data_size": 0 00:11:57.714 } 00:11:57.714 ] 00:11:57.714 }' 00:11:57.714 08:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.714 08:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.281 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:58.281 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.281 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.281 [2024-11-20 08:45:29.112066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:58.281 BaseBdev3 00:11:58.281 [2024-11-20 08:45:29.112497] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007e80 00:11:58.281 [2024-11-20 08:45:29.112531] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:58.281 [2024-11-20 08:45:29.112882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:58.281 [2024-11-20 08:45:29.113092] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:58.281 [2024-11-20 08:45:29.113109] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:58.281 [2024-11-20 08:45:29.113311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.281 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.281 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:58.281 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:58.281 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.281 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:58.281 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:58.281 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.281 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.281 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.281 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.281 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.281 08:45:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:58.281 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.281 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.281 [ 00:11:58.281 { 00:11:58.281 "name": "BaseBdev3", 00:11:58.281 "aliases": [ 00:11:58.282 "cc1a8a02-5904-4f6b-8242-64bcc5483d19" 00:11:58.282 ], 00:11:58.282 "product_name": "Malloc disk", 00:11:58.282 "block_size": 512, 00:11:58.282 "num_blocks": 65536, 00:11:58.282 "uuid": "cc1a8a02-5904-4f6b-8242-64bcc5483d19", 00:11:58.282 "assigned_rate_limits": { 00:11:58.282 "rw_ios_per_sec": 0, 00:11:58.282 "rw_mbytes_per_sec": 0, 00:11:58.282 "r_mbytes_per_sec": 0, 00:11:58.282 "w_mbytes_per_sec": 0 00:11:58.282 }, 00:11:58.282 "claimed": true, 00:11:58.282 "claim_type": "exclusive_write", 00:11:58.282 "zoned": false, 00:11:58.282 "supported_io_types": { 00:11:58.282 "read": true, 00:11:58.282 "write": true, 00:11:58.282 "unmap": true, 00:11:58.282 "flush": true, 00:11:58.282 "reset": true, 00:11:58.282 "nvme_admin": false, 00:11:58.282 "nvme_io": false, 00:11:58.282 "nvme_io_md": false, 00:11:58.282 "write_zeroes": true, 00:11:58.282 "zcopy": true, 00:11:58.282 "get_zone_info": false, 00:11:58.282 "zone_management": false, 00:11:58.282 "zone_append": false, 00:11:58.282 "compare": false, 00:11:58.282 "compare_and_write": false, 00:11:58.282 "abort": true, 00:11:58.282 "seek_hole": false, 00:11:58.282 "seek_data": false, 00:11:58.282 "copy": true, 00:11:58.282 "nvme_iov_md": false 00:11:58.282 }, 00:11:58.282 "memory_domains": [ 00:11:58.282 { 00:11:58.282 "dma_device_id": "system", 00:11:58.282 "dma_device_type": 1 00:11:58.282 }, 00:11:58.282 { 00:11:58.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.282 "dma_device_type": 2 00:11:58.282 } 00:11:58.282 ], 00:11:58.282 "driver_specific": {} 00:11:58.282 } 00:11:58.282 ] 
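The `bdev_get_bdevs` dump just above is the third malloc base bdev; once it is claimed the raid goes online, and `verify_raid_bdev_properties` compares the volume's capabilities against its members. In the Raid Volume dump further down, the raid1 bdev reports `unmap`, `flush`, `zcopy`, `abort`, and `copy` as false even though every malloc member reports them true. A dependency-free sketch of that comparison over trimmed copies of the two `supported_io_types` maps:

```shell
# supported_io_types flags as logged for a malloc base bdev vs the raid1 volume.
malloc_io='"unmap": true, "flush": true, "zcopy": true, "abort": true, "copy": true'
raid1_io='"unmap": false, "flush": false, "zcopy": false, "abort": false, "copy": false'

# Prints "yes"/"no" depending on whether the named io type is flagged true.
supports() { case "$2" in *"\"$1\": true"*) echo yes ;; *) echo no ;; esac; }

base_unmap=$(supports unmap "$malloc_io")   # malloc disks support unmap
raid_unmap=$(supports unmap "$raid1_io")    # the raid1 volume does not, per the dump
```

This mirrors the intent of the helper: flags like `unmap` and `flush` are properties of the assembled volume, not simply inherited from the base bdevs.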
00:11:58.282 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.282 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:58.282 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:58.282 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:58.282 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:58.282 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.282 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.282 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.282 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.282 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.282 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.282 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.282 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.282 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.282 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.282 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.282 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.282 08:45:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.282 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.540 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.540 "name": "Existed_Raid", 00:11:58.540 "uuid": "f7673f9b-b6fc-40a7-a047-4ab4a19c2f7b", 00:11:58.540 "strip_size_kb": 0, 00:11:58.540 "state": "online", 00:11:58.540 "raid_level": "raid1", 00:11:58.540 "superblock": true, 00:11:58.540 "num_base_bdevs": 3, 00:11:58.540 "num_base_bdevs_discovered": 3, 00:11:58.540 "num_base_bdevs_operational": 3, 00:11:58.540 "base_bdevs_list": [ 00:11:58.540 { 00:11:58.540 "name": "BaseBdev1", 00:11:58.540 "uuid": "c46ed728-0308-48bd-8b56-7fabb2ae3bed", 00:11:58.540 "is_configured": true, 00:11:58.540 "data_offset": 2048, 00:11:58.540 "data_size": 63488 00:11:58.540 }, 00:11:58.540 { 00:11:58.540 "name": "BaseBdev2", 00:11:58.540 "uuid": "d33975d8-0408-4cc9-a1ee-35d33327165d", 00:11:58.540 "is_configured": true, 00:11:58.540 "data_offset": 2048, 00:11:58.540 "data_size": 63488 00:11:58.540 }, 00:11:58.540 { 00:11:58.540 "name": "BaseBdev3", 00:11:58.540 "uuid": "cc1a8a02-5904-4f6b-8242-64bcc5483d19", 00:11:58.540 "is_configured": true, 00:11:58.540 "data_offset": 2048, 00:11:58.540 "data_size": 63488 00:11:58.540 } 00:11:58.540 ] 00:11:58.540 }' 00:11:58.540 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.540 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.799 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:58.799 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:58.799 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:11:58.799 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:58.799 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:58.799 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:58.799 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:58.799 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.799 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.799 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:58.799 [2024-11-20 08:45:29.684686] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:58.799 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.059 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:59.060 "name": "Existed_Raid", 00:11:59.060 "aliases": [ 00:11:59.060 "f7673f9b-b6fc-40a7-a047-4ab4a19c2f7b" 00:11:59.060 ], 00:11:59.060 "product_name": "Raid Volume", 00:11:59.060 "block_size": 512, 00:11:59.060 "num_blocks": 63488, 00:11:59.060 "uuid": "f7673f9b-b6fc-40a7-a047-4ab4a19c2f7b", 00:11:59.060 "assigned_rate_limits": { 00:11:59.060 "rw_ios_per_sec": 0, 00:11:59.060 "rw_mbytes_per_sec": 0, 00:11:59.060 "r_mbytes_per_sec": 0, 00:11:59.060 "w_mbytes_per_sec": 0 00:11:59.060 }, 00:11:59.060 "claimed": false, 00:11:59.060 "zoned": false, 00:11:59.060 "supported_io_types": { 00:11:59.060 "read": true, 00:11:59.060 "write": true, 00:11:59.060 "unmap": false, 00:11:59.060 "flush": false, 00:11:59.060 "reset": true, 00:11:59.060 "nvme_admin": false, 00:11:59.060 "nvme_io": false, 00:11:59.060 "nvme_io_md": false, 00:11:59.060 
"write_zeroes": true, 00:11:59.060 "zcopy": false, 00:11:59.060 "get_zone_info": false, 00:11:59.060 "zone_management": false, 00:11:59.060 "zone_append": false, 00:11:59.060 "compare": false, 00:11:59.060 "compare_and_write": false, 00:11:59.060 "abort": false, 00:11:59.060 "seek_hole": false, 00:11:59.060 "seek_data": false, 00:11:59.060 "copy": false, 00:11:59.060 "nvme_iov_md": false 00:11:59.060 }, 00:11:59.060 "memory_domains": [ 00:11:59.060 { 00:11:59.060 "dma_device_id": "system", 00:11:59.060 "dma_device_type": 1 00:11:59.060 }, 00:11:59.060 { 00:11:59.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.060 "dma_device_type": 2 00:11:59.060 }, 00:11:59.060 { 00:11:59.060 "dma_device_id": "system", 00:11:59.060 "dma_device_type": 1 00:11:59.060 }, 00:11:59.060 { 00:11:59.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.060 "dma_device_type": 2 00:11:59.060 }, 00:11:59.060 { 00:11:59.060 "dma_device_id": "system", 00:11:59.060 "dma_device_type": 1 00:11:59.060 }, 00:11:59.060 { 00:11:59.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.060 "dma_device_type": 2 00:11:59.060 } 00:11:59.060 ], 00:11:59.060 "driver_specific": { 00:11:59.060 "raid": { 00:11:59.060 "uuid": "f7673f9b-b6fc-40a7-a047-4ab4a19c2f7b", 00:11:59.060 "strip_size_kb": 0, 00:11:59.060 "state": "online", 00:11:59.060 "raid_level": "raid1", 00:11:59.060 "superblock": true, 00:11:59.060 "num_base_bdevs": 3, 00:11:59.060 "num_base_bdevs_discovered": 3, 00:11:59.060 "num_base_bdevs_operational": 3, 00:11:59.060 "base_bdevs_list": [ 00:11:59.060 { 00:11:59.060 "name": "BaseBdev1", 00:11:59.060 "uuid": "c46ed728-0308-48bd-8b56-7fabb2ae3bed", 00:11:59.060 "is_configured": true, 00:11:59.060 "data_offset": 2048, 00:11:59.060 "data_size": 63488 00:11:59.060 }, 00:11:59.060 { 00:11:59.060 "name": "BaseBdev2", 00:11:59.060 "uuid": "d33975d8-0408-4cc9-a1ee-35d33327165d", 00:11:59.060 "is_configured": true, 00:11:59.060 "data_offset": 2048, 00:11:59.060 "data_size": 63488 00:11:59.060 }, 
00:11:59.060 { 00:11:59.060 "name": "BaseBdev3", 00:11:59.060 "uuid": "cc1a8a02-5904-4f6b-8242-64bcc5483d19", 00:11:59.060 "is_configured": true, 00:11:59.060 "data_offset": 2048, 00:11:59.060 "data_size": 63488 00:11:59.060 } 00:11:59.060 ] 00:11:59.060 } 00:11:59.060 } 00:11:59.060 }' 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:59.060 BaseBdev2 00:11:59.060 BaseBdev3' 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.060 
08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.060 08:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.320 [2024-11-20 08:45:30.012570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.320 
08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.320 "name": "Existed_Raid", 00:11:59.320 "uuid": "f7673f9b-b6fc-40a7-a047-4ab4a19c2f7b", 00:11:59.320 "strip_size_kb": 0, 00:11:59.320 "state": "online", 00:11:59.320 "raid_level": "raid1", 00:11:59.320 "superblock": true, 00:11:59.320 "num_base_bdevs": 3, 00:11:59.320 "num_base_bdevs_discovered": 2, 00:11:59.320 "num_base_bdevs_operational": 2, 00:11:59.320 "base_bdevs_list": [ 00:11:59.320 { 00:11:59.320 "name": null, 00:11:59.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.320 "is_configured": false, 00:11:59.320 "data_offset": 0, 00:11:59.320 "data_size": 63488 00:11:59.320 }, 00:11:59.320 { 00:11:59.320 "name": "BaseBdev2", 00:11:59.320 "uuid": "d33975d8-0408-4cc9-a1ee-35d33327165d", 00:11:59.320 "is_configured": true, 00:11:59.320 "data_offset": 2048, 00:11:59.320 "data_size": 63488 00:11:59.320 }, 00:11:59.320 { 00:11:59.320 "name": "BaseBdev3", 00:11:59.320 "uuid": "cc1a8a02-5904-4f6b-8242-64bcc5483d19", 00:11:59.320 "is_configured": true, 00:11:59.320 "data_offset": 2048, 00:11:59.320 "data_size": 63488 00:11:59.320 } 00:11:59.320 ] 00:11:59.320 }' 00:11:59.320 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.320 
08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.962 [2024-11-20 08:45:30.690625] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.962 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.963 [2024-11-20 08:45:30.837318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:59.963 [2024-11-20 08:45:30.837448] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.221 [2024-11-20 08:45:30.925826] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.221 [2024-11-20 08:45:30.926220] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.221 [2024-11-20 08:45:30.926257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:00.221 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.221 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:00.221 08:45:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:00.221 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:00.221 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.221 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.221 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.221 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.221 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:00.221 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:00.221 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:00.221 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:00.221 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:00.221 08:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:00.221 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.221 08:45:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.221 BaseBdev2 00:12:00.221 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.221 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:00.221 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.222 [ 00:12:00.222 { 00:12:00.222 "name": "BaseBdev2", 00:12:00.222 "aliases": [ 00:12:00.222 "9b33b0b0-2b75-470e-9c9f-243f56c2e893" 00:12:00.222 ], 00:12:00.222 "product_name": "Malloc disk", 00:12:00.222 "block_size": 512, 00:12:00.222 "num_blocks": 65536, 00:12:00.222 "uuid": "9b33b0b0-2b75-470e-9c9f-243f56c2e893", 00:12:00.222 "assigned_rate_limits": { 00:12:00.222 "rw_ios_per_sec": 0, 00:12:00.222 "rw_mbytes_per_sec": 0, 00:12:00.222 "r_mbytes_per_sec": 0, 00:12:00.222 "w_mbytes_per_sec": 0 00:12:00.222 }, 00:12:00.222 "claimed": false, 00:12:00.222 "zoned": false, 00:12:00.222 "supported_io_types": { 00:12:00.222 "read": true, 00:12:00.222 "write": true, 00:12:00.222 "unmap": true, 00:12:00.222 "flush": true, 00:12:00.222 "reset": true, 00:12:00.222 "nvme_admin": false, 00:12:00.222 "nvme_io": false, 00:12:00.222 
"nvme_io_md": false, 00:12:00.222 "write_zeroes": true, 00:12:00.222 "zcopy": true, 00:12:00.222 "get_zone_info": false, 00:12:00.222 "zone_management": false, 00:12:00.222 "zone_append": false, 00:12:00.222 "compare": false, 00:12:00.222 "compare_and_write": false, 00:12:00.222 "abort": true, 00:12:00.222 "seek_hole": false, 00:12:00.222 "seek_data": false, 00:12:00.222 "copy": true, 00:12:00.222 "nvme_iov_md": false 00:12:00.222 }, 00:12:00.222 "memory_domains": [ 00:12:00.222 { 00:12:00.222 "dma_device_id": "system", 00:12:00.222 "dma_device_type": 1 00:12:00.222 }, 00:12:00.222 { 00:12:00.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.222 "dma_device_type": 2 00:12:00.222 } 00:12:00.222 ], 00:12:00.222 "driver_specific": {} 00:12:00.222 } 00:12:00.222 ] 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.222 BaseBdev3 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.222 [ 00:12:00.222 { 00:12:00.222 "name": "BaseBdev3", 00:12:00.222 "aliases": [ 00:12:00.222 "da73ce0e-ade5-4aaa-aa9c-553672b356dc" 00:12:00.222 ], 00:12:00.222 "product_name": "Malloc disk", 00:12:00.222 "block_size": 512, 00:12:00.222 "num_blocks": 65536, 00:12:00.222 "uuid": "da73ce0e-ade5-4aaa-aa9c-553672b356dc", 00:12:00.222 "assigned_rate_limits": { 00:12:00.222 "rw_ios_per_sec": 0, 00:12:00.222 "rw_mbytes_per_sec": 0, 00:12:00.222 "r_mbytes_per_sec": 0, 00:12:00.222 "w_mbytes_per_sec": 0 00:12:00.222 }, 00:12:00.222 "claimed": false, 00:12:00.222 "zoned": false, 00:12:00.222 "supported_io_types": { 00:12:00.222 "read": true, 00:12:00.222 "write": true, 00:12:00.222 "unmap": true, 00:12:00.222 "flush": true, 00:12:00.222 "reset": true, 00:12:00.222 "nvme_admin": false, 
00:12:00.222 "nvme_io": false, 00:12:00.222 "nvme_io_md": false, 00:12:00.222 "write_zeroes": true, 00:12:00.222 "zcopy": true, 00:12:00.222 "get_zone_info": false, 00:12:00.222 "zone_management": false, 00:12:00.222 "zone_append": false, 00:12:00.222 "compare": false, 00:12:00.222 "compare_and_write": false, 00:12:00.222 "abort": true, 00:12:00.222 "seek_hole": false, 00:12:00.222 "seek_data": false, 00:12:00.222 "copy": true, 00:12:00.222 "nvme_iov_md": false 00:12:00.222 }, 00:12:00.222 "memory_domains": [ 00:12:00.222 { 00:12:00.222 "dma_device_id": "system", 00:12:00.222 "dma_device_type": 1 00:12:00.222 }, 00:12:00.222 { 00:12:00.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.222 "dma_device_type": 2 00:12:00.222 } 00:12:00.222 ], 00:12:00.222 "driver_specific": {} 00:12:00.222 } 00:12:00.222 ] 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.222 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.222 [2024-11-20 08:45:31.132334] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:00.222 [2024-11-20 08:45:31.132556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:00.222 [2024-11-20 08:45:31.132751] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.481 [2024-11-20 08:45:31.135478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:00.481 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.481 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:00.481 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.481 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.481 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.481 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.481 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:00.481 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.481 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.481 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.481 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.481 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.481 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.481 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.481 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.481 
08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.481 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.481 "name": "Existed_Raid", 00:12:00.481 "uuid": "c575b831-f805-431f-a2d7-8870b16ebcd1", 00:12:00.481 "strip_size_kb": 0, 00:12:00.481 "state": "configuring", 00:12:00.481 "raid_level": "raid1", 00:12:00.481 "superblock": true, 00:12:00.481 "num_base_bdevs": 3, 00:12:00.481 "num_base_bdevs_discovered": 2, 00:12:00.481 "num_base_bdevs_operational": 3, 00:12:00.481 "base_bdevs_list": [ 00:12:00.481 { 00:12:00.481 "name": "BaseBdev1", 00:12:00.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.481 "is_configured": false, 00:12:00.481 "data_offset": 0, 00:12:00.481 "data_size": 0 00:12:00.481 }, 00:12:00.481 { 00:12:00.481 "name": "BaseBdev2", 00:12:00.481 "uuid": "9b33b0b0-2b75-470e-9c9f-243f56c2e893", 00:12:00.481 "is_configured": true, 00:12:00.481 "data_offset": 2048, 00:12:00.481 "data_size": 63488 00:12:00.481 }, 00:12:00.481 { 00:12:00.481 "name": "BaseBdev3", 00:12:00.481 "uuid": "da73ce0e-ade5-4aaa-aa9c-553672b356dc", 00:12:00.481 "is_configured": true, 00:12:00.481 "data_offset": 2048, 00:12:00.481 "data_size": 63488 00:12:00.481 } 00:12:00.481 ] 00:12:00.481 }' 00:12:00.481 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.481 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.741 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:00.741 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.741 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.000 [2024-11-20 08:45:31.656481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:01.000 08:45:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.000 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:12:01.000 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:01.000 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:01.000 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:01.000 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:01.000 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:01.000 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:01.000 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:01.000 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:01.000 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:01.000 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:01.000 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:01.000 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.000 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:01.000 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.000 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:01.000 "name": "Existed_Raid",
00:12:01.000 "uuid": "c575b831-f805-431f-a2d7-8870b16ebcd1",
00:12:01.000 "strip_size_kb": 0,
00:12:01.000 "state": "configuring",
00:12:01.000 "raid_level": "raid1",
00:12:01.000 "superblock": true,
00:12:01.000 "num_base_bdevs": 3,
00:12:01.000 "num_base_bdevs_discovered": 1,
00:12:01.000 "num_base_bdevs_operational": 3,
00:12:01.000 "base_bdevs_list": [
00:12:01.000 {
00:12:01.000 "name": "BaseBdev1",
00:12:01.000 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:01.000 "is_configured": false,
00:12:01.000 "data_offset": 0,
00:12:01.000 "data_size": 0
00:12:01.000 },
00:12:01.000 {
00:12:01.000 "name": null,
00:12:01.000 "uuid": "9b33b0b0-2b75-470e-9c9f-243f56c2e893",
00:12:01.000 "is_configured": false,
00:12:01.000 "data_offset": 0,
00:12:01.000 "data_size": 63488
00:12:01.000 },
00:12:01.000 {
00:12:01.000 "name": "BaseBdev3",
00:12:01.000 "uuid": "da73ce0e-ade5-4aaa-aa9c-553672b356dc",
00:12:01.000 "is_configured": true,
00:12:01.000 "data_offset": 2048,
00:12:01.000 "data_size": 63488
00:12:01.000 }
00:12:01.000 ]
00:12:01.000 }'
00:12:01.000 08:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:01.000 08:45:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:01.259 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:01.259 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:12:01.259 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.259 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:01.259 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:01.519 [2024-11-20 08:45:32.238262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:01.519 BaseBdev1
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:01.519 [
00:12:01.519 {
00:12:01.519 "name": "BaseBdev1",
00:12:01.519 "aliases": [
00:12:01.519 "6ef3e78d-0ba1-4585-b33a-813c3e8ac758"
00:12:01.519 ],
00:12:01.519 "product_name": "Malloc disk",
00:12:01.519 "block_size": 512,
00:12:01.519 "num_blocks": 65536,
00:12:01.519 "uuid": "6ef3e78d-0ba1-4585-b33a-813c3e8ac758",
00:12:01.519 "assigned_rate_limits": {
00:12:01.519 "rw_ios_per_sec": 0,
00:12:01.519 "rw_mbytes_per_sec": 0,
00:12:01.519 "r_mbytes_per_sec": 0,
00:12:01.519 "w_mbytes_per_sec": 0
00:12:01.519 },
00:12:01.519 "claimed": true,
00:12:01.519 "claim_type": "exclusive_write",
00:12:01.519 "zoned": false,
00:12:01.519 "supported_io_types": {
00:12:01.519 "read": true,
00:12:01.519 "write": true,
00:12:01.519 "unmap": true,
00:12:01.519 "flush": true,
00:12:01.519 "reset": true,
00:12:01.519 "nvme_admin": false,
00:12:01.519 "nvme_io": false,
00:12:01.519 "nvme_io_md": false,
00:12:01.519 "write_zeroes": true,
00:12:01.519 "zcopy": true,
00:12:01.519 "get_zone_info": false,
00:12:01.519 "zone_management": false,
00:12:01.519 "zone_append": false,
00:12:01.519 "compare": false,
00:12:01.519 "compare_and_write": false,
00:12:01.519 "abort": true,
00:12:01.519 "seek_hole": false,
00:12:01.519 "seek_data": false,
00:12:01.519 "copy": true,
00:12:01.519 "nvme_iov_md": false
00:12:01.519 },
00:12:01.519 "memory_domains": [
00:12:01.519 {
00:12:01.519 "dma_device_id": "system",
00:12:01.519 "dma_device_type": 1
00:12:01.519 },
00:12:01.519 {
00:12:01.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:01.519 "dma_device_type": 2
00:12:01.519 }
00:12:01.519 ],
00:12:01.519 "driver_specific": {}
00:12:01.519 }
00:12:01.519 ]
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:01.519 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:01.520 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:01.520 "name": "Existed_Raid",
00:12:01.520 "uuid": "c575b831-f805-431f-a2d7-8870b16ebcd1",
00:12:01.520 "strip_size_kb": 0,
00:12:01.520 "state": "configuring",
00:12:01.520 "raid_level": "raid1",
00:12:01.520 "superblock": true,
00:12:01.520 "num_base_bdevs": 3,
00:12:01.520 "num_base_bdevs_discovered": 2,
00:12:01.520 "num_base_bdevs_operational": 3,
00:12:01.520 "base_bdevs_list": [
00:12:01.520 {
00:12:01.520 "name": "BaseBdev1",
00:12:01.520 "uuid": "6ef3e78d-0ba1-4585-b33a-813c3e8ac758",
00:12:01.520 "is_configured": true,
00:12:01.520 "data_offset": 2048,
00:12:01.520 "data_size": 63488
00:12:01.520 },
00:12:01.520 {
00:12:01.520 "name": null,
00:12:01.520 "uuid": "9b33b0b0-2b75-470e-9c9f-243f56c2e893",
00:12:01.520 "is_configured": false,
00:12:01.520 "data_offset": 0,
00:12:01.520 "data_size": 63488
00:12:01.520 },
00:12:01.520 {
00:12:01.520 "name": "BaseBdev3",
00:12:01.520 "uuid": "da73ce0e-ade5-4aaa-aa9c-553672b356dc",
00:12:01.520 "is_configured": true,
00:12:01.520 "data_offset": 2048,
00:12:01.520 "data_size": 63488
00:12:01.520 }
00:12:01.520 ]
00:12:01.520 }'
00:12:01.520 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:01.520 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:02.088 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:02.088 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:02.088 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:02.088 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:12:02.088 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:02.088 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:12:02.088 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:12:02.088 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:02.088 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:02.088 [2024-11-20 08:45:32.834503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:12:02.088 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:02.088 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:12:02.088 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:02.089 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:02.089 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:02.089 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:02.089 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:02.089 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:02.089 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:02.089 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:02.089 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:02.089 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:02.089 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:02.089 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:02.089 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:02.089 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:02.089 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:02.089 "name": "Existed_Raid",
00:12:02.089 "uuid": "c575b831-f805-431f-a2d7-8870b16ebcd1",
00:12:02.089 "strip_size_kb": 0,
00:12:02.089 "state": "configuring",
00:12:02.089 "raid_level": "raid1",
00:12:02.089 "superblock": true,
00:12:02.089 "num_base_bdevs": 3,
00:12:02.089 "num_base_bdevs_discovered": 1,
00:12:02.089 "num_base_bdevs_operational": 3,
00:12:02.089 "base_bdevs_list": [
00:12:02.089 {
00:12:02.089 "name": "BaseBdev1",
00:12:02.089 "uuid": "6ef3e78d-0ba1-4585-b33a-813c3e8ac758",
00:12:02.089 "is_configured": true,
00:12:02.089 "data_offset": 2048,
00:12:02.089 "data_size": 63488
00:12:02.089 },
00:12:02.089 {
00:12:02.089 "name": null,
00:12:02.089 "uuid": "9b33b0b0-2b75-470e-9c9f-243f56c2e893",
00:12:02.089 "is_configured": false,
00:12:02.089 "data_offset": 0,
00:12:02.089 "data_size": 63488
00:12:02.089 },
00:12:02.089 {
00:12:02.089 "name": null,
00:12:02.089 "uuid": "da73ce0e-ade5-4aaa-aa9c-553672b356dc",
00:12:02.089 "is_configured": false,
00:12:02.089 "data_offset": 0,
00:12:02.089 "data_size": 63488
00:12:02.089 }
00:12:02.089 ]
00:12:02.089 }'
00:12:02.089 08:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:02.089 08:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:02.656 [2024-11-20 08:45:33.426761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:02.656 "name": "Existed_Raid",
00:12:02.656 "uuid": "c575b831-f805-431f-a2d7-8870b16ebcd1",
00:12:02.656 "strip_size_kb": 0,
00:12:02.656 "state": "configuring",
00:12:02.656 "raid_level": "raid1",
00:12:02.656 "superblock": true,
00:12:02.656 "num_base_bdevs": 3,
00:12:02.656 "num_base_bdevs_discovered": 2,
00:12:02.656 "num_base_bdevs_operational": 3,
00:12:02.656 "base_bdevs_list": [
00:12:02.656 {
00:12:02.656 "name": "BaseBdev1",
00:12:02.656 "uuid": "6ef3e78d-0ba1-4585-b33a-813c3e8ac758",
00:12:02.656 "is_configured": true,
00:12:02.656 "data_offset": 2048,
00:12:02.656 "data_size": 63488
00:12:02.656 },
00:12:02.656 {
00:12:02.656 "name": null,
00:12:02.656 "uuid": "9b33b0b0-2b75-470e-9c9f-243f56c2e893",
00:12:02.656 "is_configured": false,
00:12:02.656 "data_offset": 0,
00:12:02.656 "data_size": 63488
00:12:02.656 },
00:12:02.656 {
00:12:02.656 "name": "BaseBdev3",
00:12:02.656 "uuid": "da73ce0e-ade5-4aaa-aa9c-553672b356dc",
00:12:02.656 "is_configured": true,
00:12:02.656 "data_offset": 2048,
00:12:02.656 "data_size": 63488
00:12:02.656 }
00:12:02.656 ]
00:12:02.656 }'
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:02.656 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:03.223 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:03.223 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.223 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:03.223 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:12:03.223 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.223 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:12:03.223 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:12:03.223 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.223 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:03.223 [2024-11-20 08:45:33.994944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:03.223 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.223 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:12:03.223 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:03.223 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:03.223 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:03.223 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:03.223 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:03.223 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:03.223 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:03.223 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:03.223 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:03.223 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:03.223 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:03.223 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.223 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:03.223 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.223 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:03.223 "name": "Existed_Raid",
00:12:03.223 "uuid": "c575b831-f805-431f-a2d7-8870b16ebcd1",
00:12:03.223 "strip_size_kb": 0,
00:12:03.223 "state": "configuring",
00:12:03.223 "raid_level": "raid1",
00:12:03.223 "superblock": true,
00:12:03.223 "num_base_bdevs": 3,
00:12:03.223 "num_base_bdevs_discovered": 1,
00:12:03.223 "num_base_bdevs_operational": 3,
00:12:03.223 "base_bdevs_list": [
00:12:03.223 {
00:12:03.223 "name": null,
00:12:03.223 "uuid": "6ef3e78d-0ba1-4585-b33a-813c3e8ac758",
00:12:03.223 "is_configured": false,
00:12:03.223 "data_offset": 0,
00:12:03.223 "data_size": 63488
00:12:03.223 },
00:12:03.223 {
00:12:03.223 "name": null,
00:12:03.223 "uuid": "9b33b0b0-2b75-470e-9c9f-243f56c2e893",
00:12:03.223 "is_configured": false,
00:12:03.223 "data_offset": 0,
00:12:03.223 "data_size": 63488
00:12:03.223 },
00:12:03.223 {
00:12:03.223 "name": "BaseBdev3",
00:12:03.223 "uuid": "da73ce0e-ade5-4aaa-aa9c-553672b356dc",
00:12:03.223 "is_configured": true,
00:12:03.223 "data_offset": 2048,
00:12:03.223 "data_size": 63488
00:12:03.223 }
00:12:03.223 ]
00:12:03.223 }'
00:12:03.223 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:03.223 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:03.792 [2024-11-20 08:45:34.664408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:03.792 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.054 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:04.054 "name": "Existed_Raid",
00:12:04.054 "uuid": "c575b831-f805-431f-a2d7-8870b16ebcd1",
00:12:04.054 "strip_size_kb": 0,
00:12:04.054 "state": "configuring",
00:12:04.054 "raid_level": "raid1",
00:12:04.054 "superblock": true,
00:12:04.054 "num_base_bdevs": 3,
00:12:04.054 "num_base_bdevs_discovered": 2,
00:12:04.054 "num_base_bdevs_operational": 3,
00:12:04.054 "base_bdevs_list": [
00:12:04.054 {
00:12:04.054 "name": null,
00:12:04.054 "uuid": "6ef3e78d-0ba1-4585-b33a-813c3e8ac758",
00:12:04.054 "is_configured": false,
00:12:04.054 "data_offset": 0,
00:12:04.054 "data_size": 63488
00:12:04.054 },
00:12:04.054 {
00:12:04.054 "name": "BaseBdev2",
00:12:04.054 "uuid": "9b33b0b0-2b75-470e-9c9f-243f56c2e893",
00:12:04.054 "is_configured": true,
00:12:04.054 "data_offset": 2048,
00:12:04.054 "data_size": 63488
00:12:04.054 },
00:12:04.054 {
00:12:04.054 "name": "BaseBdev3",
00:12:04.054 "uuid": "da73ce0e-ade5-4aaa-aa9c-553672b356dc",
00:12:04.054 "is_configured": true,
00:12:04.054 "data_offset": 2048,
00:12:04.054 "data_size": 63488
00:12:04.054 }
00:12:04.054 ]
00:12:04.054 }'
00:12:04.054 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:04.054 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:04.313 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:04.313 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.313 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:04.313 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:12:04.313 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6ef3e78d-0ba1-4585-b33a-813c3e8ac758
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:04.573 [2024-11-20 08:45:35.338635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:12:04.573 [2024-11-20 08:45:35.338958] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:12:04.573 [2024-11-20 08:45:35.338976] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:04.573 [2024-11-20 08:45:35.339343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:12:04.573 NewBaseBdev
00:12:04.573 [2024-11-20 08:45:35.339553] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:12:04.573 [2024-11-20 08:45:35.339577] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200
00:12:04.573 [2024-11-20 08:45:35.339764] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:04.573 [
00:12:04.573 {
00:12:04.573 "name": "NewBaseBdev",
00:12:04.573 "aliases": [
00:12:04.573 "6ef3e78d-0ba1-4585-b33a-813c3e8ac758"
00:12:04.573 ],
00:12:04.573 "product_name": "Malloc disk",
00:12:04.573 "block_size": 512,
00:12:04.573 "num_blocks": 65536,
00:12:04.573 "uuid": "6ef3e78d-0ba1-4585-b33a-813c3e8ac758",
00:12:04.573 "assigned_rate_limits": {
00:12:04.573 "rw_ios_per_sec": 0,
00:12:04.573 "rw_mbytes_per_sec": 0,
00:12:04.573 "r_mbytes_per_sec": 0,
00:12:04.573 "w_mbytes_per_sec": 0
00:12:04.573 },
00:12:04.573 "claimed": true,
00:12:04.573 "claim_type": "exclusive_write",
00:12:04.573 "zoned": false,
00:12:04.573 "supported_io_types": {
00:12:04.573 "read": true,
00:12:04.573 "write": true,
00:12:04.573 "unmap": true,
00:12:04.573 "flush": true,
00:12:04.573 "reset": true,
00:12:04.573 "nvme_admin": false,
00:12:04.573 "nvme_io": false,
00:12:04.573 "nvme_io_md": false,
00:12:04.573 "write_zeroes": true,
00:12:04.573 "zcopy": true,
00:12:04.573 "get_zone_info": false,
00:12:04.573 "zone_management": false,
00:12:04.573 "zone_append": false,
00:12:04.573 "compare": false,
00:12:04.573 "compare_and_write": false,
00:12:04.573 "abort": true,
00:12:04.573 "seek_hole": false,
00:12:04.573 "seek_data": false,
00:12:04.573 "copy": true,
00:12:04.573 "nvme_iov_md": false
00:12:04.573 },
00:12:04.573 "memory_domains": [
00:12:04.573 {
00:12:04.573 "dma_device_id": "system",
00:12:04.573 "dma_device_type": 1
00:12:04.573 },
00:12:04.573 {
00:12:04.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:04.573 "dma_device_type": 2
00:12:04.573 }
00:12:04.573 ],
00:12:04.573 "driver_specific": {}
00:12:04.573 }
00:12:04.573 ]
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:04.573 "name": "Existed_Raid",
00:12:04.573 "uuid": "c575b831-f805-431f-a2d7-8870b16ebcd1",
00:12:04.573 "strip_size_kb": 0,
00:12:04.573 "state": "online",
00:12:04.573 "raid_level": "raid1",
00:12:04.573 "superblock": true,
00:12:04.573 "num_base_bdevs": 3,
00:12:04.573 "num_base_bdevs_discovered": 3,
00:12:04.573 "num_base_bdevs_operational": 3,
00:12:04.573 "base_bdevs_list": [
00:12:04.573 {
00:12:04.573 "name": "NewBaseBdev",
00:12:04.573 "uuid": "6ef3e78d-0ba1-4585-b33a-813c3e8ac758",
00:12:04.573 "is_configured": true,
00:12:04.573 "data_offset": 2048,
00:12:04.573 "data_size": 63488
00:12:04.573 },
00:12:04.573 {
00:12:04.573 "name": "BaseBdev2",
00:12:04.573 "uuid": "9b33b0b0-2b75-470e-9c9f-243f56c2e893",
00:12:04.573 "is_configured": true,
00:12:04.573 "data_offset": 2048,
00:12:04.573 "data_size": 63488
00:12:04.573 },
00:12:04.573 {
00:12:04.573 "name": "BaseBdev3",
00:12:04.573 "uuid": "da73ce0e-ade5-4aaa-aa9c-553672b356dc",
00:12:04.573 "is_configured": true,
00:12:04.573 "data_offset": 2048,
00:12:04.573 "data_size": 63488
00:12:04.573 }
00:12:04.573 ]
00:12:04.573 }'
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:04.573 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.141 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:12:05.141 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:12:05.141 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:05.141 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:05.141 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:12:05.141 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:05.141 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:12:05.141 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:05.141 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.141 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.141 [2024-11-20 08:45:35.863287] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:05.141 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.141 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:05.141 "name": "Existed_Raid",
00:12:05.141 "aliases": [
00:12:05.141 "c575b831-f805-431f-a2d7-8870b16ebcd1"
00:12:05.141 ],
00:12:05.141 "product_name": "Raid Volume",
00:12:05.141 "block_size": 512,
00:12:05.141 "num_blocks": 63488,
00:12:05.141 "uuid": "c575b831-f805-431f-a2d7-8870b16ebcd1",
00:12:05.141 "assigned_rate_limits": {
00:12:05.141 "rw_ios_per_sec": 0,
00:12:05.141 "rw_mbytes_per_sec": 0,
00:12:05.141 "r_mbytes_per_sec": 0,
00:12:05.141 "w_mbytes_per_sec": 0
00:12:05.141 },
00:12:05.141 "claimed": false,
00:12:05.141 "zoned": false,
00:12:05.141 "supported_io_types": {
00:12:05.141 "read": true,
00:12:05.141 "write": true,
00:12:05.141 "unmap": false,
00:12:05.141 "flush": false,
00:12:05.141 "reset": true,
00:12:05.141 "nvme_admin": false,
00:12:05.141 "nvme_io": false,
00:12:05.141 "nvme_io_md": false,
00:12:05.141 "write_zeroes": true,
00:12:05.141 "zcopy": false,
00:12:05.141 "get_zone_info": false,
00:12:05.141 "zone_management": false,
00:12:05.141 "zone_append": false,
00:12:05.141 "compare": false,
00:12:05.141 "compare_and_write": false,
00:12:05.141 "abort": false,
00:12:05.141 "seek_hole": false,
00:12:05.141 "seek_data": false,
00:12:05.141 "copy": false,
00:12:05.141 "nvme_iov_md": false
00:12:05.141 },
00:12:05.141 "memory_domains": [
00:12:05.141 {
00:12:05.141 "dma_device_id": "system",
00:12:05.141 "dma_device_type": 1
00:12:05.141 },
00:12:05.141 {
00:12:05.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:05.141 "dma_device_type": 2
00:12:05.141 },
00:12:05.141 {
00:12:05.141 "dma_device_id": "system",
00:12:05.141 "dma_device_type": 1
00:12:05.141 },
00:12:05.141 {
00:12:05.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:05.141 "dma_device_type": 2
00:12:05.141 },
00:12:05.141 {
00:12:05.141 "dma_device_id": "system",
00:12:05.141 "dma_device_type": 1
00:12:05.141 },
00:12:05.141 {
00:12:05.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:05.141 "dma_device_type": 2
00:12:05.141 }
00:12:05.141 ],
00:12:05.141 "driver_specific": {
00:12:05.141 "raid": {
"uuid": "c575b831-f805-431f-a2d7-8870b16ebcd1", 00:12:05.141 "strip_size_kb": 0, 00:12:05.141 "state": "online", 00:12:05.141 "raid_level": "raid1", 00:12:05.141 "superblock": true, 00:12:05.141 "num_base_bdevs": 3, 00:12:05.141 "num_base_bdevs_discovered": 3, 00:12:05.141 "num_base_bdevs_operational": 3, 00:12:05.141 "base_bdevs_list": [ 00:12:05.141 { 00:12:05.141 "name": "NewBaseBdev", 00:12:05.141 "uuid": "6ef3e78d-0ba1-4585-b33a-813c3e8ac758", 00:12:05.141 "is_configured": true, 00:12:05.141 "data_offset": 2048, 00:12:05.141 "data_size": 63488 00:12:05.141 }, 00:12:05.141 { 00:12:05.141 "name": "BaseBdev2", 00:12:05.141 "uuid": "9b33b0b0-2b75-470e-9c9f-243f56c2e893", 00:12:05.141 "is_configured": true, 00:12:05.141 "data_offset": 2048, 00:12:05.141 "data_size": 63488 00:12:05.141 }, 00:12:05.141 { 00:12:05.141 "name": "BaseBdev3", 00:12:05.141 "uuid": "da73ce0e-ade5-4aaa-aa9c-553672b356dc", 00:12:05.141 "is_configured": true, 00:12:05.141 "data_offset": 2048, 00:12:05.141 "data_size": 63488 00:12:05.141 } 00:12:05.141 ] 00:12:05.141 } 00:12:05.141 } 00:12:05.141 }' 00:12:05.141 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:05.141 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:05.141 BaseBdev2 00:12:05.141 BaseBdev3' 00:12:05.141 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.141 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:05.141 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.141 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.141 
08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:05.141 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.141 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.141 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.400 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.400 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.400 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.400 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.400 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:05.400 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.400 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.400 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.400 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.400 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.400 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.400 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:05.400 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.400 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.400 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.400 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.400 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.401 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.401 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:05.401 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.401 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.401 [2024-11-20 08:45:36.178952] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:05.401 [2024-11-20 08:45:36.179140] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:05.401 [2024-11-20 08:45:36.179272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.401 [2024-11-20 08:45:36.179670] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.401 [2024-11-20 08:45:36.179690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:05.401 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.401 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68069 00:12:05.401 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68069 ']' 00:12:05.401 08:45:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68069 00:12:05.401 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:05.401 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.401 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68069 00:12:05.401 killing process with pid 68069 00:12:05.401 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.401 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:05.401 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68069' 00:12:05.401 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68069 00:12:05.401 [2024-11-20 08:45:36.219973] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:05.401 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68069 00:12:05.659 [2024-11-20 08:45:36.489555] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:06.595 ************************************ 00:12:06.595 END TEST raid_state_function_test_sb 00:12:06.595 ************************************ 00:12:06.595 08:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:06.595 00:12:06.595 real 0m11.834s 00:12:06.595 user 0m19.591s 00:12:06.595 sys 0m1.689s 00:12:06.595 08:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.595 08:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.854 08:45:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:12:06.854 08:45:37 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:06.854 08:45:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.854 08:45:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:06.855 ************************************ 00:12:06.855 START TEST raid_superblock_test 00:12:06.855 ************************************ 00:12:06.855 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:12:06.855 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:06.855 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:06.855 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:06.855 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:06.855 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:06.855 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:06.855 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:06.855 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:06.855 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:06.855 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:06.855 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:06.855 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:06.855 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:06.855 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:06.855 08:45:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:06.855 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68706 00:12:06.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.855 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68706 00:12:06.855 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68706 ']' 00:12:06.855 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.855 08:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:06.855 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.855 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.855 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.855 08:45:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.855 [2024-11-20 08:45:37.687911] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:12:06.855 [2024-11-20 08:45:37.688522] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68706 ] 00:12:07.114 [2024-11-20 08:45:37.878449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.114 [2024-11-20 08:45:38.009528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.373 [2024-11-20 08:45:38.206153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:07.373 [2024-11-20 08:45:38.206482] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:07.942 
08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.942 malloc1 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.942 [2024-11-20 08:45:38.701603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:07.942 [2024-11-20 08:45:38.701710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.942 [2024-11-20 08:45:38.701744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:07.942 [2024-11-20 08:45:38.701759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.942 [2024-11-20 08:45:38.704676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.942 [2024-11-20 08:45:38.704724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:07.942 pt1 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.942 malloc2 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.942 [2024-11-20 08:45:38.757888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:07.942 [2024-11-20 08:45:38.758136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.942 [2024-11-20 08:45:38.758232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:07.942 [2024-11-20 08:45:38.758412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.942 [2024-11-20 08:45:38.761322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.942 [2024-11-20 08:45:38.761509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:07.942 
pt2 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.942 malloc3 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.942 [2024-11-20 08:45:38.824779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:07.942 [2024-11-20 08:45:38.824879] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.942 [2024-11-20 08:45:38.824914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:07.942 [2024-11-20 08:45:38.824930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.942 [2024-11-20 08:45:38.827843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.942 [2024-11-20 08:45:38.827893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:07.942 pt3 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.942 [2024-11-20 08:45:38.836937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:07.942 [2024-11-20 08:45:38.839490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:07.942 [2024-11-20 08:45:38.839610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:07.942 [2024-11-20 08:45:38.839858] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:07.942 [2024-11-20 08:45:38.839887] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:07.942 [2024-11-20 08:45:38.840310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:07.942 
[2024-11-20 08:45:38.840559] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:07.942 [2024-11-20 08:45:38.840594] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:07.942 [2024-11-20 08:45:38.840891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.942 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.943 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.943 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.943 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:12:08.201 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.201 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.201 "name": "raid_bdev1", 00:12:08.201 "uuid": "d2aee7cb-232e-4425-8afb-b8c5f6bdcd21", 00:12:08.201 "strip_size_kb": 0, 00:12:08.201 "state": "online", 00:12:08.201 "raid_level": "raid1", 00:12:08.201 "superblock": true, 00:12:08.201 "num_base_bdevs": 3, 00:12:08.201 "num_base_bdevs_discovered": 3, 00:12:08.201 "num_base_bdevs_operational": 3, 00:12:08.201 "base_bdevs_list": [ 00:12:08.201 { 00:12:08.201 "name": "pt1", 00:12:08.201 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:08.201 "is_configured": true, 00:12:08.201 "data_offset": 2048, 00:12:08.201 "data_size": 63488 00:12:08.201 }, 00:12:08.201 { 00:12:08.201 "name": "pt2", 00:12:08.201 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:08.201 "is_configured": true, 00:12:08.201 "data_offset": 2048, 00:12:08.201 "data_size": 63488 00:12:08.201 }, 00:12:08.201 { 00:12:08.201 "name": "pt3", 00:12:08.201 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:08.201 "is_configured": true, 00:12:08.201 "data_offset": 2048, 00:12:08.201 "data_size": 63488 00:12:08.201 } 00:12:08.201 ] 00:12:08.201 }' 00:12:08.201 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.201 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.464 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:08.464 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:08.464 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:08.464 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:08.464 08:45:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:08.464 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:08.464 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:08.464 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:08.464 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.464 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.464 [2024-11-20 08:45:39.369415] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:08.723 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.723 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:08.723 "name": "raid_bdev1", 00:12:08.723 "aliases": [ 00:12:08.723 "d2aee7cb-232e-4425-8afb-b8c5f6bdcd21" 00:12:08.723 ], 00:12:08.723 "product_name": "Raid Volume", 00:12:08.723 "block_size": 512, 00:12:08.723 "num_blocks": 63488, 00:12:08.723 "uuid": "d2aee7cb-232e-4425-8afb-b8c5f6bdcd21", 00:12:08.723 "assigned_rate_limits": { 00:12:08.723 "rw_ios_per_sec": 0, 00:12:08.723 "rw_mbytes_per_sec": 0, 00:12:08.723 "r_mbytes_per_sec": 0, 00:12:08.723 "w_mbytes_per_sec": 0 00:12:08.723 }, 00:12:08.723 "claimed": false, 00:12:08.723 "zoned": false, 00:12:08.723 "supported_io_types": { 00:12:08.723 "read": true, 00:12:08.723 "write": true, 00:12:08.723 "unmap": false, 00:12:08.723 "flush": false, 00:12:08.723 "reset": true, 00:12:08.723 "nvme_admin": false, 00:12:08.723 "nvme_io": false, 00:12:08.723 "nvme_io_md": false, 00:12:08.723 "write_zeroes": true, 00:12:08.723 "zcopy": false, 00:12:08.723 "get_zone_info": false, 00:12:08.723 "zone_management": false, 00:12:08.723 "zone_append": false, 00:12:08.723 "compare": false, 00:12:08.723 
"compare_and_write": false, 00:12:08.723 "abort": false, 00:12:08.723 "seek_hole": false, 00:12:08.723 "seek_data": false, 00:12:08.723 "copy": false, 00:12:08.723 "nvme_iov_md": false 00:12:08.723 }, 00:12:08.723 "memory_domains": [ 00:12:08.723 { 00:12:08.723 "dma_device_id": "system", 00:12:08.723 "dma_device_type": 1 00:12:08.723 }, 00:12:08.723 { 00:12:08.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.724 "dma_device_type": 2 00:12:08.724 }, 00:12:08.724 { 00:12:08.724 "dma_device_id": "system", 00:12:08.724 "dma_device_type": 1 00:12:08.724 }, 00:12:08.724 { 00:12:08.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.724 "dma_device_type": 2 00:12:08.724 }, 00:12:08.724 { 00:12:08.724 "dma_device_id": "system", 00:12:08.724 "dma_device_type": 1 00:12:08.724 }, 00:12:08.724 { 00:12:08.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.724 "dma_device_type": 2 00:12:08.724 } 00:12:08.724 ], 00:12:08.724 "driver_specific": { 00:12:08.724 "raid": { 00:12:08.724 "uuid": "d2aee7cb-232e-4425-8afb-b8c5f6bdcd21", 00:12:08.724 "strip_size_kb": 0, 00:12:08.724 "state": "online", 00:12:08.724 "raid_level": "raid1", 00:12:08.724 "superblock": true, 00:12:08.724 "num_base_bdevs": 3, 00:12:08.724 "num_base_bdevs_discovered": 3, 00:12:08.724 "num_base_bdevs_operational": 3, 00:12:08.724 "base_bdevs_list": [ 00:12:08.724 { 00:12:08.724 "name": "pt1", 00:12:08.724 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:08.724 "is_configured": true, 00:12:08.724 "data_offset": 2048, 00:12:08.724 "data_size": 63488 00:12:08.724 }, 00:12:08.724 { 00:12:08.724 "name": "pt2", 00:12:08.724 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:08.724 "is_configured": true, 00:12:08.724 "data_offset": 2048, 00:12:08.724 "data_size": 63488 00:12:08.724 }, 00:12:08.724 { 00:12:08.724 "name": "pt3", 00:12:08.724 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:08.724 "is_configured": true, 00:12:08.724 "data_offset": 2048, 00:12:08.724 "data_size": 63488 00:12:08.724 } 
00:12:08.724 ] 00:12:08.724 } 00:12:08.724 } 00:12:08.724 }' 00:12:08.724 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:08.724 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:08.724 pt2 00:12:08.724 pt3' 00:12:08.724 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.724 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:08.724 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.724 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.724 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:08.724 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.724 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.724 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.724 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.724 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.724 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.724 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:08.724 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.724 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.724 08:45:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.724 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.724 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.724 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.724 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.724 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:08.724 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.724 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.724 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.983 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.983 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.983 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.983 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:08.983 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.983 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.983 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:08.983 [2024-11-20 08:45:39.689442] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:08.983 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:12:08.983 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d2aee7cb-232e-4425-8afb-b8c5f6bdcd21 00:12:08.983 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d2aee7cb-232e-4425-8afb-b8c5f6bdcd21 ']' 00:12:08.983 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:08.983 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.983 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.983 [2024-11-20 08:45:39.741068] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:08.983 [2024-11-20 08:45:39.741100] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:08.983 [2024-11-20 08:45:39.741234] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:08.983 [2024-11-20 08:45:39.741336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:08.983 [2024-11-20 08:45:39.741353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:08.983 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.983 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.983 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.983 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.983 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:08.983 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.983 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:08.983 
08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:08.983 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.984 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.243 [2024-11-20 08:45:39.897229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:09.244 [2024-11-20 08:45:39.899857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:09.244 [2024-11-20 08:45:39.899960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:12:09.244 [2024-11-20 08:45:39.900047] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:09.244 [2024-11-20 08:45:39.900149] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:09.244 [2024-11-20 08:45:39.900226] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:09.244 [2024-11-20 08:45:39.900256] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:09.244 [2024-11-20 08:45:39.900270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:09.244 request: 00:12:09.244 { 00:12:09.244 "name": "raid_bdev1", 00:12:09.244 "raid_level": "raid1", 00:12:09.244 "base_bdevs": [ 00:12:09.244 "malloc1", 00:12:09.244 "malloc2", 00:12:09.244 "malloc3" 00:12:09.244 ], 00:12:09.244 "superblock": false, 00:12:09.244 "method": "bdev_raid_create", 00:12:09.244 "req_id": 1 00:12:09.244 } 00:12:09.244 Got JSON-RPC error response 00:12:09.244 response: 00:12:09.244 { 00:12:09.244 "code": -17, 00:12:09.244 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:09.244 } 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.244 08:45:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.244 [2024-11-20 08:45:39.969139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:09.244 [2024-11-20 08:45:39.969380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.244 [2024-11-20 08:45:39.969543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:09.244 [2024-11-20 08:45:39.969681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.244 [2024-11-20 08:45:39.972661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.244 [2024-11-20 08:45:39.972842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:09.244 [2024-11-20 08:45:39.973059] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:09.244 [2024-11-20 08:45:39.973137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:09.244 pt1 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.244 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.244 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.244 "name": "raid_bdev1", 00:12:09.244 "uuid": "d2aee7cb-232e-4425-8afb-b8c5f6bdcd21", 00:12:09.244 "strip_size_kb": 0, 00:12:09.244 "state": "configuring", 00:12:09.244 
"raid_level": "raid1", 00:12:09.244 "superblock": true, 00:12:09.244 "num_base_bdevs": 3, 00:12:09.244 "num_base_bdevs_discovered": 1, 00:12:09.244 "num_base_bdevs_operational": 3, 00:12:09.244 "base_bdevs_list": [ 00:12:09.244 { 00:12:09.244 "name": "pt1", 00:12:09.244 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:09.244 "is_configured": true, 00:12:09.244 "data_offset": 2048, 00:12:09.244 "data_size": 63488 00:12:09.244 }, 00:12:09.244 { 00:12:09.244 "name": null, 00:12:09.244 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:09.244 "is_configured": false, 00:12:09.244 "data_offset": 2048, 00:12:09.244 "data_size": 63488 00:12:09.244 }, 00:12:09.244 { 00:12:09.244 "name": null, 00:12:09.244 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:09.244 "is_configured": false, 00:12:09.244 "data_offset": 2048, 00:12:09.244 "data_size": 63488 00:12:09.244 } 00:12:09.244 ] 00:12:09.244 }' 00:12:09.244 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.244 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.813 [2024-11-20 08:45:40.505582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:09.813 [2024-11-20 08:45:40.505682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.813 [2024-11-20 08:45:40.505716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:09.813 [2024-11-20 08:45:40.505731] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.813 [2024-11-20 08:45:40.506311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.813 [2024-11-20 08:45:40.506344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:09.813 [2024-11-20 08:45:40.506462] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:09.813 [2024-11-20 08:45:40.506502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:09.813 pt2 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.813 [2024-11-20 08:45:40.517539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.813 "name": "raid_bdev1", 00:12:09.813 "uuid": "d2aee7cb-232e-4425-8afb-b8c5f6bdcd21", 00:12:09.813 "strip_size_kb": 0, 00:12:09.813 "state": "configuring", 00:12:09.813 "raid_level": "raid1", 00:12:09.813 "superblock": true, 00:12:09.813 "num_base_bdevs": 3, 00:12:09.813 "num_base_bdevs_discovered": 1, 00:12:09.813 "num_base_bdevs_operational": 3, 00:12:09.813 "base_bdevs_list": [ 00:12:09.813 { 00:12:09.813 "name": "pt1", 00:12:09.813 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:09.813 "is_configured": true, 00:12:09.813 "data_offset": 2048, 00:12:09.813 "data_size": 63488 00:12:09.813 }, 00:12:09.813 { 00:12:09.813 "name": null, 00:12:09.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:09.813 "is_configured": false, 00:12:09.813 "data_offset": 0, 00:12:09.813 "data_size": 63488 00:12:09.813 }, 00:12:09.813 { 00:12:09.813 "name": null, 00:12:09.813 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:09.813 "is_configured": false, 00:12:09.813 "data_offset": 2048, 00:12:09.813 
"data_size": 63488 00:12:09.813 } 00:12:09.813 ] 00:12:09.813 }' 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.813 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.381 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:10.381 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:10.381 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:10.381 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.381 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.381 [2024-11-20 08:45:41.033693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:10.381 [2024-11-20 08:45:41.033951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.382 [2024-11-20 08:45:41.033991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:10.382 [2024-11-20 08:45:41.034010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.382 [2024-11-20 08:45:41.034630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.382 [2024-11-20 08:45:41.034661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:10.382 [2024-11-20 08:45:41.034770] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:10.382 [2024-11-20 08:45:41.034824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:10.382 pt2 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.382 [2024-11-20 08:45:41.045644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:10.382 [2024-11-20 08:45:41.045701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.382 [2024-11-20 08:45:41.045730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:10.382 [2024-11-20 08:45:41.045750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.382 [2024-11-20 08:45:41.046211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.382 [2024-11-20 08:45:41.046252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:10.382 [2024-11-20 08:45:41.046329] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:10.382 [2024-11-20 08:45:41.046363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:10.382 [2024-11-20 08:45:41.046513] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:10.382 [2024-11-20 08:45:41.046545] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:10.382 [2024-11-20 08:45:41.046862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:10.382 [2024-11-20 08:45:41.047084] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:12:10.382 [2024-11-20 08:45:41.047107] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:10.382 [2024-11-20 08:45:41.047304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.382 pt3 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.382 "name": "raid_bdev1", 00:12:10.382 "uuid": "d2aee7cb-232e-4425-8afb-b8c5f6bdcd21", 00:12:10.382 "strip_size_kb": 0, 00:12:10.382 "state": "online", 00:12:10.382 "raid_level": "raid1", 00:12:10.382 "superblock": true, 00:12:10.382 "num_base_bdevs": 3, 00:12:10.382 "num_base_bdevs_discovered": 3, 00:12:10.382 "num_base_bdevs_operational": 3, 00:12:10.382 "base_bdevs_list": [ 00:12:10.382 { 00:12:10.382 "name": "pt1", 00:12:10.382 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:10.382 "is_configured": true, 00:12:10.382 "data_offset": 2048, 00:12:10.382 "data_size": 63488 00:12:10.382 }, 00:12:10.382 { 00:12:10.382 "name": "pt2", 00:12:10.382 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:10.382 "is_configured": true, 00:12:10.382 "data_offset": 2048, 00:12:10.382 "data_size": 63488 00:12:10.382 }, 00:12:10.382 { 00:12:10.382 "name": "pt3", 00:12:10.382 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:10.382 "is_configured": true, 00:12:10.382 "data_offset": 2048, 00:12:10.382 "data_size": 63488 00:12:10.382 } 00:12:10.382 ] 00:12:10.382 }' 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.382 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:10.950 [2024-11-20 08:45:41.582240] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:10.950 "name": "raid_bdev1", 00:12:10.950 "aliases": [ 00:12:10.950 "d2aee7cb-232e-4425-8afb-b8c5f6bdcd21" 00:12:10.950 ], 00:12:10.950 "product_name": "Raid Volume", 00:12:10.950 "block_size": 512, 00:12:10.950 "num_blocks": 63488, 00:12:10.950 "uuid": "d2aee7cb-232e-4425-8afb-b8c5f6bdcd21", 00:12:10.950 "assigned_rate_limits": { 00:12:10.950 "rw_ios_per_sec": 0, 00:12:10.950 "rw_mbytes_per_sec": 0, 00:12:10.950 "r_mbytes_per_sec": 0, 00:12:10.950 "w_mbytes_per_sec": 0 00:12:10.950 }, 00:12:10.950 "claimed": false, 00:12:10.950 "zoned": false, 00:12:10.950 "supported_io_types": { 00:12:10.950 "read": true, 00:12:10.950 "write": true, 00:12:10.950 "unmap": false, 00:12:10.950 "flush": false, 00:12:10.950 "reset": true, 00:12:10.950 "nvme_admin": false, 00:12:10.950 "nvme_io": false, 00:12:10.950 "nvme_io_md": false, 00:12:10.950 "write_zeroes": true, 00:12:10.950 "zcopy": false, 00:12:10.950 "get_zone_info": false, 
00:12:10.950 "zone_management": false, 00:12:10.950 "zone_append": false, 00:12:10.950 "compare": false, 00:12:10.950 "compare_and_write": false, 00:12:10.950 "abort": false, 00:12:10.950 "seek_hole": false, 00:12:10.950 "seek_data": false, 00:12:10.950 "copy": false, 00:12:10.950 "nvme_iov_md": false 00:12:10.950 }, 00:12:10.950 "memory_domains": [ 00:12:10.950 { 00:12:10.950 "dma_device_id": "system", 00:12:10.950 "dma_device_type": 1 00:12:10.950 }, 00:12:10.950 { 00:12:10.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.950 "dma_device_type": 2 00:12:10.950 }, 00:12:10.950 { 00:12:10.950 "dma_device_id": "system", 00:12:10.950 "dma_device_type": 1 00:12:10.950 }, 00:12:10.950 { 00:12:10.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.950 "dma_device_type": 2 00:12:10.950 }, 00:12:10.950 { 00:12:10.950 "dma_device_id": "system", 00:12:10.950 "dma_device_type": 1 00:12:10.950 }, 00:12:10.950 { 00:12:10.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.950 "dma_device_type": 2 00:12:10.950 } 00:12:10.950 ], 00:12:10.950 "driver_specific": { 00:12:10.950 "raid": { 00:12:10.950 "uuid": "d2aee7cb-232e-4425-8afb-b8c5f6bdcd21", 00:12:10.950 "strip_size_kb": 0, 00:12:10.950 "state": "online", 00:12:10.950 "raid_level": "raid1", 00:12:10.950 "superblock": true, 00:12:10.950 "num_base_bdevs": 3, 00:12:10.950 "num_base_bdevs_discovered": 3, 00:12:10.950 "num_base_bdevs_operational": 3, 00:12:10.950 "base_bdevs_list": [ 00:12:10.950 { 00:12:10.950 "name": "pt1", 00:12:10.950 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:10.950 "is_configured": true, 00:12:10.950 "data_offset": 2048, 00:12:10.950 "data_size": 63488 00:12:10.950 }, 00:12:10.950 { 00:12:10.950 "name": "pt2", 00:12:10.950 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:10.950 "is_configured": true, 00:12:10.950 "data_offset": 2048, 00:12:10.950 "data_size": 63488 00:12:10.950 }, 00:12:10.950 { 00:12:10.950 "name": "pt3", 00:12:10.950 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:12:10.950 "is_configured": true, 00:12:10.950 "data_offset": 2048, 00:12:10.950 "data_size": 63488 00:12:10.950 } 00:12:10.950 ] 00:12:10.950 } 00:12:10.950 } 00:12:10.950 }' 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:10.950 pt2 00:12:10.950 pt3' 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:10.950 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:10.951 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.951 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.951 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.951 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.951 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.951 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.951 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:10.951 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.951 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.951 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.951 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.210 [2024-11-20 08:45:41.898248] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d2aee7cb-232e-4425-8afb-b8c5f6bdcd21 '!=' d2aee7cb-232e-4425-8afb-b8c5f6bdcd21 ']' 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.210 [2024-11-20 08:45:41.945956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.210 08:45:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.210 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.210 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.210 "name": "raid_bdev1", 00:12:11.210 "uuid": "d2aee7cb-232e-4425-8afb-b8c5f6bdcd21", 00:12:11.210 "strip_size_kb": 0, 00:12:11.210 "state": "online", 00:12:11.210 "raid_level": "raid1", 00:12:11.210 "superblock": true, 00:12:11.210 "num_base_bdevs": 3, 00:12:11.210 "num_base_bdevs_discovered": 2, 00:12:11.210 "num_base_bdevs_operational": 2, 00:12:11.210 "base_bdevs_list": [ 00:12:11.210 { 00:12:11.210 "name": null, 00:12:11.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.210 "is_configured": false, 00:12:11.210 "data_offset": 0, 00:12:11.210 "data_size": 63488 00:12:11.210 }, 00:12:11.210 { 00:12:11.210 "name": "pt2", 00:12:11.210 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:11.210 "is_configured": true, 00:12:11.210 "data_offset": 2048, 00:12:11.210 "data_size": 63488 00:12:11.210 }, 00:12:11.210 { 00:12:11.210 "name": "pt3", 00:12:11.210 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:11.210 "is_configured": true, 00:12:11.210 "data_offset": 2048, 00:12:11.210 "data_size": 63488 00:12:11.210 } 
00:12:11.210 ] 00:12:11.210 }' 00:12:11.210 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.210 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.778 [2024-11-20 08:45:42.466047] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:11.778 [2024-11-20 08:45:42.466294] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:11.778 [2024-11-20 08:45:42.466421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:11.778 [2024-11-20 08:45:42.466504] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:11.778 [2024-11-20 08:45:42.466527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.778 08:45:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.778 [2024-11-20 08:45:42.550014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:11.778 [2024-11-20 08:45:42.550111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.778 [2024-11-20 08:45:42.550137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:11.778 [2024-11-20 08:45:42.550171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.778 [2024-11-20 08:45:42.553117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.778 [2024-11-20 08:45:42.553215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:11.778 [2024-11-20 08:45:42.553327] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:11.778 [2024-11-20 08:45:42.553392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:11.778 pt2 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.778 08:45:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.778 "name": "raid_bdev1", 00:12:11.778 "uuid": "d2aee7cb-232e-4425-8afb-b8c5f6bdcd21", 00:12:11.778 "strip_size_kb": 0, 00:12:11.778 "state": "configuring", 00:12:11.778 "raid_level": "raid1", 00:12:11.778 "superblock": true, 00:12:11.778 "num_base_bdevs": 3, 00:12:11.778 "num_base_bdevs_discovered": 1, 00:12:11.778 "num_base_bdevs_operational": 2, 00:12:11.778 "base_bdevs_list": [ 00:12:11.778 { 00:12:11.778 "name": null, 00:12:11.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.778 "is_configured": false, 00:12:11.778 "data_offset": 2048, 00:12:11.778 "data_size": 63488 00:12:11.778 }, 00:12:11.778 { 00:12:11.778 "name": "pt2", 00:12:11.778 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:11.778 "is_configured": true, 00:12:11.778 "data_offset": 2048, 00:12:11.778 "data_size": 63488 00:12:11.778 }, 00:12:11.778 { 00:12:11.778 "name": null, 00:12:11.778 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:11.778 "is_configured": false, 00:12:11.778 "data_offset": 2048, 00:12:11.778 "data_size": 63488 00:12:11.778 } 
00:12:11.778 ] 00:12:11.778 }' 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.778 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.387 [2024-11-20 08:45:43.078224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:12.387 [2024-11-20 08:45:43.078446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.387 [2024-11-20 08:45:43.078488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:12.387 [2024-11-20 08:45:43.078508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.387 [2024-11-20 08:45:43.079116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.387 [2024-11-20 08:45:43.079165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:12.387 [2024-11-20 08:45:43.079303] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:12.387 [2024-11-20 08:45:43.079347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:12.387 [2024-11-20 08:45:43.079495] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:12:12.387 [2024-11-20 08:45:43.079516] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:12.387 [2024-11-20 08:45:43.079860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:12.387 [2024-11-20 08:45:43.080059] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:12.387 [2024-11-20 08:45:43.080075] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:12.387 [2024-11-20 08:45:43.080270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.387 pt3 00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.387 "name": "raid_bdev1", 00:12:12.387 "uuid": "d2aee7cb-232e-4425-8afb-b8c5f6bdcd21", 00:12:12.387 "strip_size_kb": 0, 00:12:12.387 "state": "online", 00:12:12.387 "raid_level": "raid1", 00:12:12.387 "superblock": true, 00:12:12.387 "num_base_bdevs": 3, 00:12:12.387 "num_base_bdevs_discovered": 2, 00:12:12.387 "num_base_bdevs_operational": 2, 00:12:12.387 "base_bdevs_list": [ 00:12:12.387 { 00:12:12.387 "name": null, 00:12:12.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.387 "is_configured": false, 00:12:12.387 "data_offset": 2048, 00:12:12.387 "data_size": 63488 00:12:12.387 }, 00:12:12.387 { 00:12:12.387 "name": "pt2", 00:12:12.387 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:12.387 "is_configured": true, 00:12:12.387 "data_offset": 2048, 00:12:12.387 "data_size": 63488 00:12:12.387 }, 00:12:12.387 { 00:12:12.387 "name": "pt3", 00:12:12.387 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:12.387 "is_configured": true, 00:12:12.387 "data_offset": 2048, 00:12:12.387 "data_size": 63488 00:12:12.387 } 00:12:12.387 ] 00:12:12.387 }' 00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.387 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.956 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:12.956 08:45:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.956 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.956 [2024-11-20 08:45:43.614334] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:12.956 [2024-11-20 08:45:43.614518] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:12.956 [2024-11-20 08:45:43.614634] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:12.956 [2024-11-20 08:45:43.614719] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:12.956 [2024-11-20 08:45:43.614735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:12.956 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.956 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.956 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.956 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:12.956 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.956 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.956 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:12.956 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:12.956 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:12:12.956 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:12:12.956 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:12:12.956 08:45:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.956 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.956 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.956 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:12.956 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.957 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.957 [2024-11-20 08:45:43.678350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:12.957 [2024-11-20 08:45:43.678421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.957 [2024-11-20 08:45:43.678455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:12.957 [2024-11-20 08:45:43.678470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.957 [2024-11-20 08:45:43.681344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.957 [2024-11-20 08:45:43.681393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:12.957 [2024-11-20 08:45:43.681498] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:12.957 [2024-11-20 08:45:43.681556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:12.957 [2024-11-20 08:45:43.681731] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:12.957 [2024-11-20 08:45:43.681749] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:12.957 [2024-11-20 08:45:43.681772] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:12:12.957 [2024-11-20 08:45:43.681843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:12.957 pt1 00:12:12.957 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.957 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:12:12.957 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:12.957 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.957 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.957 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.957 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.957 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:12.957 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.957 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.957 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.957 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.957 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.957 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.957 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.957 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.957 08:45:43 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.957 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.957 "name": "raid_bdev1", 00:12:12.957 "uuid": "d2aee7cb-232e-4425-8afb-b8c5f6bdcd21", 00:12:12.957 "strip_size_kb": 0, 00:12:12.957 "state": "configuring", 00:12:12.957 "raid_level": "raid1", 00:12:12.957 "superblock": true, 00:12:12.957 "num_base_bdevs": 3, 00:12:12.957 "num_base_bdevs_discovered": 1, 00:12:12.957 "num_base_bdevs_operational": 2, 00:12:12.957 "base_bdevs_list": [ 00:12:12.957 { 00:12:12.957 "name": null, 00:12:12.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.957 "is_configured": false, 00:12:12.957 "data_offset": 2048, 00:12:12.957 "data_size": 63488 00:12:12.957 }, 00:12:12.957 { 00:12:12.957 "name": "pt2", 00:12:12.957 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:12.957 "is_configured": true, 00:12:12.957 "data_offset": 2048, 00:12:12.957 "data_size": 63488 00:12:12.957 }, 00:12:12.957 { 00:12:12.957 "name": null, 00:12:12.957 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:12.957 "is_configured": false, 00:12:12.957 "data_offset": 2048, 00:12:12.957 "data_size": 63488 00:12:12.957 } 00:12:12.957 ] 00:12:12.957 }' 00:12:12.957 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.957 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.528 08:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:13.528 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.528 08:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:13.528 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.528 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:13.528 08:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:13.528 08:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:13.528 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.528 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.528 [2024-11-20 08:45:44.250507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:13.528 [2024-11-20 08:45:44.250638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.528 [2024-11-20 08:45:44.250671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:13.528 [2024-11-20 08:45:44.250687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.528 [2024-11-20 08:45:44.251268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.528 [2024-11-20 08:45:44.251300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:13.528 [2024-11-20 08:45:44.251408] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:13.529 [2024-11-20 08:45:44.251477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:13.529 [2024-11-20 08:45:44.251662] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:13.529 [2024-11-20 08:45:44.251678] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:13.529 [2024-11-20 08:45:44.251999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:13.529 [2024-11-20 08:45:44.252237] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:13.529 [2024-11-20 08:45:44.252259] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:13.529 [2024-11-20 08:45:44.252425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.529 pt3 00:12:13.529 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.529 08:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:13.529 08:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.529 08:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.529 08:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.529 08:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.529 08:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:13.529 08:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.529 08:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.529 08:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.529 08:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.529 08:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.529 08:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.529 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.529 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.529 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:12:13.529 08:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.529 "name": "raid_bdev1", 00:12:13.529 "uuid": "d2aee7cb-232e-4425-8afb-b8c5f6bdcd21", 00:12:13.529 "strip_size_kb": 0, 00:12:13.529 "state": "online", 00:12:13.529 "raid_level": "raid1", 00:12:13.529 "superblock": true, 00:12:13.529 "num_base_bdevs": 3, 00:12:13.529 "num_base_bdevs_discovered": 2, 00:12:13.529 "num_base_bdevs_operational": 2, 00:12:13.529 "base_bdevs_list": [ 00:12:13.529 { 00:12:13.529 "name": null, 00:12:13.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.529 "is_configured": false, 00:12:13.529 "data_offset": 2048, 00:12:13.529 "data_size": 63488 00:12:13.529 }, 00:12:13.529 { 00:12:13.529 "name": "pt2", 00:12:13.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:13.529 "is_configured": true, 00:12:13.529 "data_offset": 2048, 00:12:13.529 "data_size": 63488 00:12:13.529 }, 00:12:13.529 { 00:12:13.529 "name": "pt3", 00:12:13.529 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:13.529 "is_configured": true, 00:12:13.529 "data_offset": 2048, 00:12:13.529 "data_size": 63488 00:12:13.529 } 00:12:13.529 ] 00:12:13.529 }' 00:12:13.529 08:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.529 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.097 08:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:14.097 08:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:14.097 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.097 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.097 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.097 08:45:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:14.097 08:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:14.097 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.097 08:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:14.097 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.097 [2024-11-20 08:45:44.855015] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.097 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.097 08:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d2aee7cb-232e-4425-8afb-b8c5f6bdcd21 '!=' d2aee7cb-232e-4425-8afb-b8c5f6bdcd21 ']' 00:12:14.097 08:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68706 00:12:14.097 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68706 ']' 00:12:14.097 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68706 00:12:14.097 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:14.097 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.097 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68706 00:12:14.097 killing process with pid 68706 00:12:14.097 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:14.097 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:14.097 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68706' 00:12:14.097 08:45:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68706 00:12:14.097 [2024-11-20 08:45:44.934061] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:14.097 08:45:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68706 00:12:14.097 [2024-11-20 08:45:44.934211] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:14.097 [2024-11-20 08:45:44.934306] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:14.097 [2024-11-20 08:45:44.934327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:14.355 [2024-11-20 08:45:45.202343] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:15.734 08:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:15.734 00:12:15.734 real 0m8.667s 00:12:15.734 user 0m14.167s 00:12:15.734 sys 0m1.230s 00:12:15.734 ************************************ 00:12:15.734 END TEST raid_superblock_test 00:12:15.734 ************************************ 00:12:15.734 08:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.734 08:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.734 08:45:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:12:15.734 08:45:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:15.734 08:45:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.734 08:45:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:15.734 ************************************ 00:12:15.734 START TEST raid_read_error_test 00:12:15.734 ************************************ 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:12:15.734 08:45:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:15.734 08:45:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MqjB6lrJeN 00:12:15.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69156 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69156 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69156 ']' 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.734 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.734 [2024-11-20 08:45:46.386071] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:12:15.734 [2024-11-20 08:45:46.386316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69156 ] 00:12:15.734 [2024-11-20 08:45:46.561373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.994 [2024-11-20 08:45:46.688706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.994 [2024-11-20 08:45:46.883537] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.994 [2024-11-20 08:45:46.883597] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.562 BaseBdev1_malloc 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.562 true 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.562 [2024-11-20 08:45:47.362063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:16.562 [2024-11-20 08:45:47.362137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.562 [2024-11-20 08:45:47.362197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:16.562 [2024-11-20 08:45:47.362219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.562 [2024-11-20 08:45:47.365056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.562 [2024-11-20 08:45:47.365111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:16.562 BaseBdev1 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.562 BaseBdev2_malloc 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.562 true 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.562 [2024-11-20 08:45:47.422840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:16.562 [2024-11-20 08:45:47.422943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.562 [2024-11-20 08:45:47.422970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:16.562 [2024-11-20 08:45:47.422987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.562 [2024-11-20 08:45:47.425808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.562 [2024-11-20 08:45:47.425856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:16.562 BaseBdev2 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.562 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.821 BaseBdev3_malloc 00:12:16.821 08:45:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.821 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:16.821 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.821 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.821 true 00:12:16.821 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.821 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:16.821 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.821 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.821 [2024-11-20 08:45:47.492720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:16.821 [2024-11-20 08:45:47.492825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.821 [2024-11-20 08:45:47.492853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:16.821 [2024-11-20 08:45:47.492871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.821 [2024-11-20 08:45:47.495755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.821 [2024-11-20 08:45:47.495810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:16.821 BaseBdev3 00:12:16.821 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.821 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:16.821 08:45:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.821 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.821 [2024-11-20 08:45:47.500801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:16.821 [2024-11-20 08:45:47.503373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:16.821 [2024-11-20 08:45:47.503481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:16.822 [2024-11-20 08:45:47.503809] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:16.822 [2024-11-20 08:45:47.503829] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:16.822 [2024-11-20 08:45:47.504169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:16.822 [2024-11-20 08:45:47.504451] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:16.822 [2024-11-20 08:45:47.504473] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:16.822 [2024-11-20 08:45:47.504706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.822 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.822 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:16.822 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.822 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.822 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.822 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.822 08:45:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.822 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.822 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.822 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.822 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.822 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.822 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.822 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.822 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.822 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.822 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.822 "name": "raid_bdev1", 00:12:16.822 "uuid": "98befb45-c849-42df-b401-7a69aa564a19", 00:12:16.822 "strip_size_kb": 0, 00:12:16.822 "state": "online", 00:12:16.822 "raid_level": "raid1", 00:12:16.822 "superblock": true, 00:12:16.822 "num_base_bdevs": 3, 00:12:16.822 "num_base_bdevs_discovered": 3, 00:12:16.822 "num_base_bdevs_operational": 3, 00:12:16.822 "base_bdevs_list": [ 00:12:16.822 { 00:12:16.822 "name": "BaseBdev1", 00:12:16.822 "uuid": "0cad692b-6c15-5754-b142-628757086954", 00:12:16.822 "is_configured": true, 00:12:16.822 "data_offset": 2048, 00:12:16.822 "data_size": 63488 00:12:16.822 }, 00:12:16.822 { 00:12:16.822 "name": "BaseBdev2", 00:12:16.822 "uuid": "724cdcdb-4b6b-5339-a6ae-7df69d93e365", 00:12:16.822 "is_configured": true, 00:12:16.822 "data_offset": 2048, 00:12:16.822 "data_size": 63488 
00:12:16.822 }, 00:12:16.822 { 00:12:16.822 "name": "BaseBdev3", 00:12:16.822 "uuid": "93a2859d-5656-5569-b813-cb36b5b423a6", 00:12:16.822 "is_configured": true, 00:12:16.822 "data_offset": 2048, 00:12:16.822 "data_size": 63488 00:12:16.822 } 00:12:16.822 ] 00:12:16.822 }' 00:12:16.822 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.822 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.390 08:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:17.390 08:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:17.390 [2024-11-20 08:45:48.110452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.327 
08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.327 "name": "raid_bdev1", 00:12:18.327 "uuid": "98befb45-c849-42df-b401-7a69aa564a19", 00:12:18.327 "strip_size_kb": 0, 00:12:18.327 "state": "online", 00:12:18.327 "raid_level": "raid1", 00:12:18.327 "superblock": true, 00:12:18.327 "num_base_bdevs": 3, 00:12:18.327 "num_base_bdevs_discovered": 3, 00:12:18.327 "num_base_bdevs_operational": 3, 00:12:18.327 "base_bdevs_list": [ 00:12:18.327 { 00:12:18.327 "name": "BaseBdev1", 00:12:18.327 "uuid": "0cad692b-6c15-5754-b142-628757086954", 
00:12:18.327 "is_configured": true, 00:12:18.327 "data_offset": 2048, 00:12:18.327 "data_size": 63488 00:12:18.327 }, 00:12:18.327 { 00:12:18.327 "name": "BaseBdev2", 00:12:18.327 "uuid": "724cdcdb-4b6b-5339-a6ae-7df69d93e365", 00:12:18.327 "is_configured": true, 00:12:18.327 "data_offset": 2048, 00:12:18.327 "data_size": 63488 00:12:18.327 }, 00:12:18.327 { 00:12:18.327 "name": "BaseBdev3", 00:12:18.327 "uuid": "93a2859d-5656-5569-b813-cb36b5b423a6", 00:12:18.327 "is_configured": true, 00:12:18.327 "data_offset": 2048, 00:12:18.327 "data_size": 63488 00:12:18.327 } 00:12:18.327 ] 00:12:18.327 }' 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.327 08:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.893 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:18.894 08:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.894 08:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.894 [2024-11-20 08:45:49.551883] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:18.894 [2024-11-20 08:45:49.552081] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.894 [2024-11-20 08:45:49.555491] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.894 [2024-11-20 08:45:49.555567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.894 [2024-11-20 08:45:49.555732] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.894 [2024-11-20 08:45:49.555750] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:18.894 { 00:12:18.894 "results": [ 00:12:18.894 { 00:12:18.894 "job": "raid_bdev1", 
00:12:18.894 "core_mask": "0x1", 00:12:18.894 "workload": "randrw", 00:12:18.894 "percentage": 50, 00:12:18.894 "status": "finished", 00:12:18.894 "queue_depth": 1, 00:12:18.894 "io_size": 131072, 00:12:18.894 "runtime": 1.43841, 00:12:18.894 "iops": 9573.070265084363, 00:12:18.894 "mibps": 1196.6337831355454, 00:12:18.894 "io_failed": 0, 00:12:18.894 "io_timeout": 0, 00:12:18.894 "avg_latency_us": 100.33331590413943, 00:12:18.894 "min_latency_us": 40.261818181818185, 00:12:18.894 "max_latency_us": 1861.8181818181818 00:12:18.894 } 00:12:18.894 ], 00:12:18.894 "core_count": 1 00:12:18.894 } 00:12:18.894 08:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.894 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69156 00:12:18.894 08:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69156 ']' 00:12:18.894 08:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69156 00:12:18.894 08:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:18.894 08:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.894 08:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69156 00:12:18.894 killing process with pid 69156 00:12:18.894 08:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.894 08:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.894 08:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69156' 00:12:18.894 08:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69156 00:12:18.894 [2024-11-20 08:45:49.589872] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:18.894 08:45:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69156 00:12:18.894 [2024-11-20 08:45:49.786106] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:20.305 08:45:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MqjB6lrJeN 00:12:20.305 08:45:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:20.305 08:45:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:20.305 ************************************ 00:12:20.305 END TEST raid_read_error_test 00:12:20.305 ************************************ 00:12:20.305 08:45:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:20.305 08:45:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:20.305 08:45:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:20.305 08:45:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:20.305 08:45:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:20.305 00:12:20.305 real 0m4.590s 00:12:20.305 user 0m5.646s 00:12:20.305 sys 0m0.553s 00:12:20.305 08:45:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.305 08:45:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.305 08:45:50 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:12:20.305 08:45:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:20.305 08:45:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.305 08:45:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:20.305 ************************************ 00:12:20.305 START TEST raid_write_error_test 00:12:20.305 ************************************ 00:12:20.305 08:45:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MNatbnIvkZ 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69303 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69303 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69303 ']' 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.305 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.305 [2024-11-20 08:45:51.025822] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:12:20.305 [2024-11-20 08:45:51.025986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69303 ] 00:12:20.305 [2024-11-20 08:45:51.200113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.564 [2024-11-20 08:45:51.328716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.822 [2024-11-20 08:45:51.526028] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.822 [2024-11-20 08:45:51.526112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.390 BaseBdev1_malloc 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.390 true 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.390 [2024-11-20 08:45:52.057559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:21.390 [2024-11-20 08:45:52.057812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.390 [2024-11-20 08:45:52.057855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:21.390 [2024-11-20 08:45:52.057876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.390 [2024-11-20 08:45:52.060846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.390 [2024-11-20 08:45:52.061059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:21.390 BaseBdev1 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:21.390 BaseBdev2_malloc 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.390 true 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.390 [2024-11-20 08:45:52.121375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:21.390 [2024-11-20 08:45:52.121655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.390 [2024-11-20 08:45:52.121729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:21.390 [2024-11-20 08:45:52.121990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.390 [2024-11-20 08:45:52.124938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.390 [2024-11-20 08:45:52.125150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:21.390 BaseBdev2 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:21.390 08:45:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.390 BaseBdev3_malloc 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.390 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.391 true 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.391 [2024-11-20 08:45:52.212916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:21.391 [2024-11-20 08:45:52.213010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.391 [2024-11-20 08:45:52.213040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:21.391 [2024-11-20 08:45:52.213058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.391 [2024-11-20 08:45:52.216044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.391 [2024-11-20 08:45:52.216290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:12:21.391 BaseBdev3 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.391 [2024-11-20 08:45:52.225032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.391 [2024-11-20 08:45:52.227574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.391 [2024-11-20 08:45:52.227864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:21.391 [2024-11-20 08:45:52.228230] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:21.391 [2024-11-20 08:45:52.228250] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:21.391 [2024-11-20 08:45:52.228631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:21.391 [2024-11-20 08:45:52.228890] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:21.391 [2024-11-20 08:45:52.228911] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:21.391 [2024-11-20 08:45:52.229213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.391 "name": "raid_bdev1", 00:12:21.391 "uuid": "53aff132-8ed1-4fa4-9f72-1ea06571ed24", 00:12:21.391 "strip_size_kb": 0, 00:12:21.391 "state": "online", 00:12:21.391 "raid_level": "raid1", 00:12:21.391 "superblock": true, 00:12:21.391 "num_base_bdevs": 3, 00:12:21.391 "num_base_bdevs_discovered": 3, 00:12:21.391 "num_base_bdevs_operational": 3, 00:12:21.391 "base_bdevs_list": [ 00:12:21.391 { 00:12:21.391 "name": "BaseBdev1", 00:12:21.391 
"uuid": "0ef39d21-7f39-528a-a8f4-0a77c577304d", 00:12:21.391 "is_configured": true, 00:12:21.391 "data_offset": 2048, 00:12:21.391 "data_size": 63488 00:12:21.391 }, 00:12:21.391 { 00:12:21.391 "name": "BaseBdev2", 00:12:21.391 "uuid": "2eda4b0b-97c7-51f0-84fd-fa7aad84e772", 00:12:21.391 "is_configured": true, 00:12:21.391 "data_offset": 2048, 00:12:21.391 "data_size": 63488 00:12:21.391 }, 00:12:21.391 { 00:12:21.391 "name": "BaseBdev3", 00:12:21.391 "uuid": "3019b241-c3db-5d25-b06f-4912eb9deec9", 00:12:21.391 "is_configured": true, 00:12:21.391 "data_offset": 2048, 00:12:21.391 "data_size": 63488 00:12:21.391 } 00:12:21.391 ] 00:12:21.391 }' 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.391 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.959 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:21.959 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:22.217 [2024-11-20 08:45:52.918812] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:23.152 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:23.152 08:45:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.152 08:45:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.152 [2024-11-20 08:45:53.772005] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:23.152 [2024-11-20 08:45:53.772077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:23.152 [2024-11-20 08:45:53.772367] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:12:23.152 08:45:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.152 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:23.152 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:23.152 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:23.152 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:12:23.152 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:23.152 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.152 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.152 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.152 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.152 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:23.152 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.152 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.152 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.152 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.152 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.152 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.153 08:45:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:23.153 08:45:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.153 08:45:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.153 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.153 "name": "raid_bdev1", 00:12:23.153 "uuid": "53aff132-8ed1-4fa4-9f72-1ea06571ed24", 00:12:23.153 "strip_size_kb": 0, 00:12:23.153 "state": "online", 00:12:23.153 "raid_level": "raid1", 00:12:23.153 "superblock": true, 00:12:23.153 "num_base_bdevs": 3, 00:12:23.153 "num_base_bdevs_discovered": 2, 00:12:23.153 "num_base_bdevs_operational": 2, 00:12:23.153 "base_bdevs_list": [ 00:12:23.153 { 00:12:23.153 "name": null, 00:12:23.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.153 "is_configured": false, 00:12:23.153 "data_offset": 0, 00:12:23.153 "data_size": 63488 00:12:23.153 }, 00:12:23.153 { 00:12:23.153 "name": "BaseBdev2", 00:12:23.153 "uuid": "2eda4b0b-97c7-51f0-84fd-fa7aad84e772", 00:12:23.153 "is_configured": true, 00:12:23.153 "data_offset": 2048, 00:12:23.153 "data_size": 63488 00:12:23.153 }, 00:12:23.153 { 00:12:23.153 "name": "BaseBdev3", 00:12:23.153 "uuid": "3019b241-c3db-5d25-b06f-4912eb9deec9", 00:12:23.153 "is_configured": true, 00:12:23.153 "data_offset": 2048, 00:12:23.153 "data_size": 63488 00:12:23.153 } 00:12:23.153 ] 00:12:23.153 }' 00:12:23.153 08:45:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.153 08:45:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.411 08:45:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:23.411 08:45:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.411 08:45:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.411 [2024-11-20 08:45:54.297372] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:23.411 [2024-11-20 08:45:54.297414] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.411 [2024-11-20 08:45:54.301047] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.412 [2024-11-20 08:45:54.301269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.412 [2024-11-20 08:45:54.301606] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.412 [2024-11-20 08:45:54.301784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:23.412 { 00:12:23.412 "results": [ 00:12:23.412 { 00:12:23.412 "job": "raid_bdev1", 00:12:23.412 "core_mask": "0x1", 00:12:23.412 "workload": "randrw", 00:12:23.412 "percentage": 50, 00:12:23.412 "status": "finished", 00:12:23.412 "queue_depth": 1, 00:12:23.412 "io_size": 131072, 00:12:23.412 "runtime": 1.375869, 00:12:23.412 "iops": 10466.839502888719, 00:12:23.412 "mibps": 1308.3549378610899, 00:12:23.412 "io_failed": 0, 00:12:23.412 "io_timeout": 0, 00:12:23.412 "avg_latency_us": 91.23143418070715, 00:12:23.412 "min_latency_us": 40.261818181818185, 00:12:23.412 "max_latency_us": 1854.370909090909 00:12:23.412 } 00:12:23.412 ], 00:12:23.412 "core_count": 1 00:12:23.412 } 00:12:23.412 08:45:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.412 08:45:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69303 00:12:23.412 08:45:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69303 ']' 00:12:23.412 08:45:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69303 00:12:23.412 08:45:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:23.412 08:45:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:23.412 08:45:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69303 00:12:23.670 08:45:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:23.670 killing process with pid 69303 00:12:23.670 08:45:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:23.670 08:45:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69303' 00:12:23.670 08:45:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69303 00:12:23.670 [2024-11-20 08:45:54.343782] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:23.670 08:45:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69303 00:12:23.670 [2024-11-20 08:45:54.554976] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:25.046 08:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MNatbnIvkZ 00:12:25.046 08:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:25.046 08:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:25.046 08:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:25.046 08:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:25.046 ************************************ 00:12:25.046 END TEST raid_write_error_test 00:12:25.046 ************************************ 00:12:25.046 08:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:25.046 08:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:25.046 08:45:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:12:25.046 00:12:25.046 real 0m4.717s 00:12:25.046 user 0m5.864s 00:12:25.046 sys 0m0.586s 00:12:25.046 08:45:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.046 08:45:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.046 08:45:55 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:12:25.046 08:45:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:25.046 08:45:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:12:25.046 08:45:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:25.046 08:45:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.046 08:45:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:25.046 ************************************ 00:12:25.046 START TEST raid_state_function_test 00:12:25.046 ************************************ 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:25.046 
08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:25.046 08:45:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69441 00:12:25.046 Process raid pid: 69441 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69441' 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69441 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69441 ']' 00:12:25.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:25.046 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.046 [2024-11-20 08:45:55.798913] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:12:25.047 [2024-11-20 08:45:55.799329] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.305 [2024-11-20 08:45:55.974892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.305 [2024-11-20 08:45:56.107252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.565 [2024-11-20 08:45:56.320601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.565 [2024-11-20 08:45:56.320650] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.133 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:26.133 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:26.133 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:26.133 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.133 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.133 [2024-11-20 08:45:56.777056] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:26.133 [2024-11-20 08:45:56.777140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:26.133 [2024-11-20 08:45:56.777207] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:26.133 [2024-11-20 08:45:56.777227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:26.133 [2024-11-20 08:45:56.777237] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:26.133 [2024-11-20 08:45:56.777252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:26.133 [2024-11-20 08:45:56.777262] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:26.133 [2024-11-20 08:45:56.777276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:26.133 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.133 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:26.134 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.134 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.134 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:26.134 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.134 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.134 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.134 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.134 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.134 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.134 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.134 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.134 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:26.134 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.134 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.134 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.134 "name": "Existed_Raid", 00:12:26.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.134 "strip_size_kb": 64, 00:12:26.134 "state": "configuring", 00:12:26.134 "raid_level": "raid0", 00:12:26.134 "superblock": false, 00:12:26.134 "num_base_bdevs": 4, 00:12:26.134 "num_base_bdevs_discovered": 0, 00:12:26.134 "num_base_bdevs_operational": 4, 00:12:26.134 "base_bdevs_list": [ 00:12:26.134 { 00:12:26.134 "name": "BaseBdev1", 00:12:26.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.134 "is_configured": false, 00:12:26.134 "data_offset": 0, 00:12:26.134 "data_size": 0 00:12:26.134 }, 00:12:26.134 { 00:12:26.134 "name": "BaseBdev2", 00:12:26.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.134 "is_configured": false, 00:12:26.134 "data_offset": 0, 00:12:26.134 "data_size": 0 00:12:26.134 }, 00:12:26.134 { 00:12:26.134 "name": "BaseBdev3", 00:12:26.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.134 "is_configured": false, 00:12:26.134 "data_offset": 0, 00:12:26.134 "data_size": 0 00:12:26.134 }, 00:12:26.134 { 00:12:26.134 "name": "BaseBdev4", 00:12:26.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.134 "is_configured": false, 00:12:26.134 "data_offset": 0, 00:12:26.134 "data_size": 0 00:12:26.134 } 00:12:26.134 ] 00:12:26.134 }' 00:12:26.134 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.134 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.702 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:12:26.702 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.702 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.702 [2024-11-20 08:45:57.349233] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:26.702 [2024-11-20 08:45:57.349296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:26.702 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.702 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:26.702 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.702 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.702 [2024-11-20 08:45:57.357175] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:26.702 [2024-11-20 08:45:57.357425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:26.702 [2024-11-20 08:45:57.357454] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:26.703 [2024-11-20 08:45:57.357474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:26.703 [2024-11-20 08:45:57.357484] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:26.703 [2024-11-20 08:45:57.357498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:26.703 [2024-11-20 08:45:57.357508] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:26.703 [2024-11-20 08:45:57.357521] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.703 [2024-11-20 08:45:57.404225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:26.703 BaseBdev1 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.703 [ 00:12:26.703 { 00:12:26.703 "name": "BaseBdev1", 00:12:26.703 "aliases": [ 00:12:26.703 "522e941f-eee4-4078-8bf0-c5f83f3254d6" 00:12:26.703 ], 00:12:26.703 "product_name": "Malloc disk", 00:12:26.703 "block_size": 512, 00:12:26.703 "num_blocks": 65536, 00:12:26.703 "uuid": "522e941f-eee4-4078-8bf0-c5f83f3254d6", 00:12:26.703 "assigned_rate_limits": { 00:12:26.703 "rw_ios_per_sec": 0, 00:12:26.703 "rw_mbytes_per_sec": 0, 00:12:26.703 "r_mbytes_per_sec": 0, 00:12:26.703 "w_mbytes_per_sec": 0 00:12:26.703 }, 00:12:26.703 "claimed": true, 00:12:26.703 "claim_type": "exclusive_write", 00:12:26.703 "zoned": false, 00:12:26.703 "supported_io_types": { 00:12:26.703 "read": true, 00:12:26.703 "write": true, 00:12:26.703 "unmap": true, 00:12:26.703 "flush": true, 00:12:26.703 "reset": true, 00:12:26.703 "nvme_admin": false, 00:12:26.703 "nvme_io": false, 00:12:26.703 "nvme_io_md": false, 00:12:26.703 "write_zeroes": true, 00:12:26.703 "zcopy": true, 00:12:26.703 "get_zone_info": false, 00:12:26.703 "zone_management": false, 00:12:26.703 "zone_append": false, 00:12:26.703 "compare": false, 00:12:26.703 "compare_and_write": false, 00:12:26.703 "abort": true, 00:12:26.703 "seek_hole": false, 00:12:26.703 "seek_data": false, 00:12:26.703 "copy": true, 00:12:26.703 "nvme_iov_md": false 00:12:26.703 }, 00:12:26.703 "memory_domains": [ 00:12:26.703 { 00:12:26.703 "dma_device_id": "system", 00:12:26.703 "dma_device_type": 1 00:12:26.703 }, 00:12:26.703 { 00:12:26.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.703 "dma_device_type": 2 00:12:26.703 } 00:12:26.703 ], 00:12:26.703 "driver_specific": {} 00:12:26.703 } 00:12:26.703 ] 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.703 "name": "Existed_Raid", 
00:12:26.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.703 "strip_size_kb": 64, 00:12:26.703 "state": "configuring", 00:12:26.703 "raid_level": "raid0", 00:12:26.703 "superblock": false, 00:12:26.703 "num_base_bdevs": 4, 00:12:26.703 "num_base_bdevs_discovered": 1, 00:12:26.703 "num_base_bdevs_operational": 4, 00:12:26.703 "base_bdevs_list": [ 00:12:26.703 { 00:12:26.703 "name": "BaseBdev1", 00:12:26.703 "uuid": "522e941f-eee4-4078-8bf0-c5f83f3254d6", 00:12:26.703 "is_configured": true, 00:12:26.703 "data_offset": 0, 00:12:26.703 "data_size": 65536 00:12:26.703 }, 00:12:26.703 { 00:12:26.703 "name": "BaseBdev2", 00:12:26.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.703 "is_configured": false, 00:12:26.703 "data_offset": 0, 00:12:26.703 "data_size": 0 00:12:26.703 }, 00:12:26.703 { 00:12:26.703 "name": "BaseBdev3", 00:12:26.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.703 "is_configured": false, 00:12:26.703 "data_offset": 0, 00:12:26.703 "data_size": 0 00:12:26.703 }, 00:12:26.703 { 00:12:26.703 "name": "BaseBdev4", 00:12:26.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.703 "is_configured": false, 00:12:26.703 "data_offset": 0, 00:12:26.703 "data_size": 0 00:12:26.703 } 00:12:26.703 ] 00:12:26.703 }' 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.703 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.271 [2024-11-20 08:45:57.960463] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:27.271 [2024-11-20 08:45:57.960527] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.271 [2024-11-20 08:45:57.968506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:27.271 [2024-11-20 08:45:57.971281] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:27.271 [2024-11-20 08:45:57.971470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:27.271 [2024-11-20 08:45:57.971596] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:27.271 [2024-11-20 08:45:57.971752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:27.271 [2024-11-20 08:45:57.971869] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:27.271 [2024-11-20 08:45:57.971949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.271 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.271 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.271 "name": "Existed_Raid", 00:12:27.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.271 "strip_size_kb": 64, 00:12:27.271 "state": "configuring", 00:12:27.271 "raid_level": "raid0", 00:12:27.271 "superblock": false, 00:12:27.271 "num_base_bdevs": 4, 00:12:27.271 
"num_base_bdevs_discovered": 1, 00:12:27.271 "num_base_bdevs_operational": 4, 00:12:27.271 "base_bdevs_list": [ 00:12:27.271 { 00:12:27.271 "name": "BaseBdev1", 00:12:27.271 "uuid": "522e941f-eee4-4078-8bf0-c5f83f3254d6", 00:12:27.271 "is_configured": true, 00:12:27.271 "data_offset": 0, 00:12:27.271 "data_size": 65536 00:12:27.271 }, 00:12:27.271 { 00:12:27.271 "name": "BaseBdev2", 00:12:27.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.271 "is_configured": false, 00:12:27.271 "data_offset": 0, 00:12:27.271 "data_size": 0 00:12:27.271 }, 00:12:27.271 { 00:12:27.271 "name": "BaseBdev3", 00:12:27.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.271 "is_configured": false, 00:12:27.271 "data_offset": 0, 00:12:27.271 "data_size": 0 00:12:27.271 }, 00:12:27.272 { 00:12:27.272 "name": "BaseBdev4", 00:12:27.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.272 "is_configured": false, 00:12:27.272 "data_offset": 0, 00:12:27.272 "data_size": 0 00:12:27.272 } 00:12:27.272 ] 00:12:27.272 }' 00:12:27.272 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.272 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.839 [2024-11-20 08:45:58.507143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:27.839 BaseBdev2 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:27.839 08:45:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.839 [ 00:12:27.839 { 00:12:27.839 "name": "BaseBdev2", 00:12:27.839 "aliases": [ 00:12:27.839 "30cdaf03-0b43-406e-a6e5-586f9f2e2945" 00:12:27.839 ], 00:12:27.839 "product_name": "Malloc disk", 00:12:27.839 "block_size": 512, 00:12:27.839 "num_blocks": 65536, 00:12:27.839 "uuid": "30cdaf03-0b43-406e-a6e5-586f9f2e2945", 00:12:27.839 "assigned_rate_limits": { 00:12:27.839 "rw_ios_per_sec": 0, 00:12:27.839 "rw_mbytes_per_sec": 0, 00:12:27.839 "r_mbytes_per_sec": 0, 00:12:27.839 "w_mbytes_per_sec": 0 00:12:27.839 }, 00:12:27.839 "claimed": true, 00:12:27.839 "claim_type": "exclusive_write", 00:12:27.839 "zoned": false, 00:12:27.839 "supported_io_types": { 
00:12:27.839 "read": true, 00:12:27.839 "write": true, 00:12:27.839 "unmap": true, 00:12:27.839 "flush": true, 00:12:27.839 "reset": true, 00:12:27.839 "nvme_admin": false, 00:12:27.839 "nvme_io": false, 00:12:27.839 "nvme_io_md": false, 00:12:27.839 "write_zeroes": true, 00:12:27.839 "zcopy": true, 00:12:27.839 "get_zone_info": false, 00:12:27.839 "zone_management": false, 00:12:27.839 "zone_append": false, 00:12:27.839 "compare": false, 00:12:27.839 "compare_and_write": false, 00:12:27.839 "abort": true, 00:12:27.839 "seek_hole": false, 00:12:27.839 "seek_data": false, 00:12:27.839 "copy": true, 00:12:27.839 "nvme_iov_md": false 00:12:27.839 }, 00:12:27.839 "memory_domains": [ 00:12:27.839 { 00:12:27.839 "dma_device_id": "system", 00:12:27.839 "dma_device_type": 1 00:12:27.839 }, 00:12:27.839 { 00:12:27.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.839 "dma_device_type": 2 00:12:27.839 } 00:12:27.839 ], 00:12:27.839 "driver_specific": {} 00:12:27.839 } 00:12:27.839 ] 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.839 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.839 "name": "Existed_Raid", 00:12:27.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.839 "strip_size_kb": 64, 00:12:27.839 "state": "configuring", 00:12:27.839 "raid_level": "raid0", 00:12:27.839 "superblock": false, 00:12:27.839 "num_base_bdevs": 4, 00:12:27.839 "num_base_bdevs_discovered": 2, 00:12:27.839 "num_base_bdevs_operational": 4, 00:12:27.839 "base_bdevs_list": [ 00:12:27.839 { 00:12:27.839 "name": "BaseBdev1", 00:12:27.839 "uuid": "522e941f-eee4-4078-8bf0-c5f83f3254d6", 00:12:27.839 "is_configured": true, 00:12:27.839 "data_offset": 0, 00:12:27.839 "data_size": 65536 00:12:27.840 }, 00:12:27.840 { 00:12:27.840 "name": "BaseBdev2", 00:12:27.840 "uuid": "30cdaf03-0b43-406e-a6e5-586f9f2e2945", 00:12:27.840 
"is_configured": true, 00:12:27.840 "data_offset": 0, 00:12:27.840 "data_size": 65536 00:12:27.840 }, 00:12:27.840 { 00:12:27.840 "name": "BaseBdev3", 00:12:27.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.840 "is_configured": false, 00:12:27.840 "data_offset": 0, 00:12:27.840 "data_size": 0 00:12:27.840 }, 00:12:27.840 { 00:12:27.840 "name": "BaseBdev4", 00:12:27.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.840 "is_configured": false, 00:12:27.840 "data_offset": 0, 00:12:27.840 "data_size": 0 00:12:27.840 } 00:12:27.840 ] 00:12:27.840 }' 00:12:27.840 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.840 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.408 [2024-11-20 08:45:59.105375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:28.408 BaseBdev3 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.408 [ 00:12:28.408 { 00:12:28.408 "name": "BaseBdev3", 00:12:28.408 "aliases": [ 00:12:28.408 "0f1927e5-6df9-465b-b801-33f229efd33d" 00:12:28.408 ], 00:12:28.408 "product_name": "Malloc disk", 00:12:28.408 "block_size": 512, 00:12:28.408 "num_blocks": 65536, 00:12:28.408 "uuid": "0f1927e5-6df9-465b-b801-33f229efd33d", 00:12:28.408 "assigned_rate_limits": { 00:12:28.408 "rw_ios_per_sec": 0, 00:12:28.408 "rw_mbytes_per_sec": 0, 00:12:28.408 "r_mbytes_per_sec": 0, 00:12:28.408 "w_mbytes_per_sec": 0 00:12:28.408 }, 00:12:28.408 "claimed": true, 00:12:28.408 "claim_type": "exclusive_write", 00:12:28.408 "zoned": false, 00:12:28.408 "supported_io_types": { 00:12:28.408 "read": true, 00:12:28.408 "write": true, 00:12:28.408 "unmap": true, 00:12:28.408 "flush": true, 00:12:28.408 "reset": true, 00:12:28.408 "nvme_admin": false, 00:12:28.408 "nvme_io": false, 00:12:28.408 "nvme_io_md": false, 00:12:28.408 "write_zeroes": true, 00:12:28.408 "zcopy": true, 00:12:28.408 "get_zone_info": false, 00:12:28.408 "zone_management": false, 00:12:28.408 "zone_append": false, 00:12:28.408 "compare": false, 00:12:28.408 "compare_and_write": false, 
00:12:28.408 "abort": true, 00:12:28.408 "seek_hole": false, 00:12:28.408 "seek_data": false, 00:12:28.408 "copy": true, 00:12:28.408 "nvme_iov_md": false 00:12:28.408 }, 00:12:28.408 "memory_domains": [ 00:12:28.408 { 00:12:28.408 "dma_device_id": "system", 00:12:28.408 "dma_device_type": 1 00:12:28.408 }, 00:12:28.408 { 00:12:28.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.408 "dma_device_type": 2 00:12:28.408 } 00:12:28.408 ], 00:12:28.408 "driver_specific": {} 00:12:28.408 } 00:12:28.408 ] 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.408 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.409 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.409 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:28.409 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.409 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.409 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.409 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.409 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.409 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.409 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.409 "name": "Existed_Raid", 00:12:28.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.409 "strip_size_kb": 64, 00:12:28.409 "state": "configuring", 00:12:28.409 "raid_level": "raid0", 00:12:28.409 "superblock": false, 00:12:28.409 "num_base_bdevs": 4, 00:12:28.409 "num_base_bdevs_discovered": 3, 00:12:28.409 "num_base_bdevs_operational": 4, 00:12:28.409 "base_bdevs_list": [ 00:12:28.409 { 00:12:28.409 "name": "BaseBdev1", 00:12:28.409 "uuid": "522e941f-eee4-4078-8bf0-c5f83f3254d6", 00:12:28.409 "is_configured": true, 00:12:28.409 "data_offset": 0, 00:12:28.409 "data_size": 65536 00:12:28.409 }, 00:12:28.409 { 00:12:28.409 "name": "BaseBdev2", 00:12:28.409 "uuid": "30cdaf03-0b43-406e-a6e5-586f9f2e2945", 00:12:28.409 "is_configured": true, 00:12:28.409 "data_offset": 0, 00:12:28.409 "data_size": 65536 00:12:28.409 }, 00:12:28.409 { 00:12:28.409 "name": "BaseBdev3", 00:12:28.409 "uuid": "0f1927e5-6df9-465b-b801-33f229efd33d", 00:12:28.409 "is_configured": true, 00:12:28.409 "data_offset": 0, 00:12:28.409 "data_size": 65536 00:12:28.409 }, 00:12:28.409 { 00:12:28.409 "name": "BaseBdev4", 00:12:28.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.409 "is_configured": false, 
00:12:28.409 "data_offset": 0, 00:12:28.409 "data_size": 0 00:12:28.409 } 00:12:28.409 ] 00:12:28.409 }' 00:12:28.409 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.409 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.977 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:28.977 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.977 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.977 [2024-11-20 08:45:59.693180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:28.977 [2024-11-20 08:45:59.693317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:28.977 [2024-11-20 08:45:59.693332] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:28.977 [2024-11-20 08:45:59.693698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:28.977 [2024-11-20 08:45:59.693936] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:28.977 [2024-11-20 08:45:59.693959] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:28.977 [2024-11-20 08:45:59.694302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.977 BaseBdev4 00:12:28.977 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.977 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:28.977 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:28.977 08:45:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:28.977 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:28.977 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:28.977 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:28.977 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:28.977 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.977 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.977 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.977 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:28.977 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.977 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.977 [ 00:12:28.977 { 00:12:28.977 "name": "BaseBdev4", 00:12:28.977 "aliases": [ 00:12:28.977 "15ba96f4-bd52-4ef2-bb4b-c650df98b332" 00:12:28.977 ], 00:12:28.977 "product_name": "Malloc disk", 00:12:28.977 "block_size": 512, 00:12:28.977 "num_blocks": 65536, 00:12:28.977 "uuid": "15ba96f4-bd52-4ef2-bb4b-c650df98b332", 00:12:28.977 "assigned_rate_limits": { 00:12:28.977 "rw_ios_per_sec": 0, 00:12:28.977 "rw_mbytes_per_sec": 0, 00:12:28.977 "r_mbytes_per_sec": 0, 00:12:28.977 "w_mbytes_per_sec": 0 00:12:28.977 }, 00:12:28.977 "claimed": true, 00:12:28.977 "claim_type": "exclusive_write", 00:12:28.977 "zoned": false, 00:12:28.977 "supported_io_types": { 00:12:28.977 "read": true, 00:12:28.977 "write": true, 00:12:28.977 "unmap": true, 00:12:28.977 "flush": true, 00:12:28.977 "reset": true, 00:12:28.977 
"nvme_admin": false, 00:12:28.977 "nvme_io": false, 00:12:28.977 "nvme_io_md": false, 00:12:28.977 "write_zeroes": true, 00:12:28.977 "zcopy": true, 00:12:28.977 "get_zone_info": false, 00:12:28.977 "zone_management": false, 00:12:28.977 "zone_append": false, 00:12:28.977 "compare": false, 00:12:28.977 "compare_and_write": false, 00:12:28.977 "abort": true, 00:12:28.977 "seek_hole": false, 00:12:28.977 "seek_data": false, 00:12:28.977 "copy": true, 00:12:28.977 "nvme_iov_md": false 00:12:28.977 }, 00:12:28.977 "memory_domains": [ 00:12:28.977 { 00:12:28.977 "dma_device_id": "system", 00:12:28.977 "dma_device_type": 1 00:12:28.977 }, 00:12:28.977 { 00:12:28.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.977 "dma_device_type": 2 00:12:28.977 } 00:12:28.977 ], 00:12:28.977 "driver_specific": {} 00:12:28.977 } 00:12:28.977 ] 00:12:28.977 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.977 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:28.977 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:28.977 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:28.977 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:28.977 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.977 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.978 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:28.978 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.978 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.978 08:45:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.978 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.978 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.978 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.978 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.978 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.978 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.978 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.978 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.978 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.978 "name": "Existed_Raid", 00:12:28.978 "uuid": "29f7fae2-2bc4-4ea3-9ec4-9519411228fc", 00:12:28.978 "strip_size_kb": 64, 00:12:28.978 "state": "online", 00:12:28.978 "raid_level": "raid0", 00:12:28.978 "superblock": false, 00:12:28.978 "num_base_bdevs": 4, 00:12:28.978 "num_base_bdevs_discovered": 4, 00:12:28.978 "num_base_bdevs_operational": 4, 00:12:28.978 "base_bdevs_list": [ 00:12:28.978 { 00:12:28.978 "name": "BaseBdev1", 00:12:28.978 "uuid": "522e941f-eee4-4078-8bf0-c5f83f3254d6", 00:12:28.978 "is_configured": true, 00:12:28.978 "data_offset": 0, 00:12:28.978 "data_size": 65536 00:12:28.978 }, 00:12:28.978 { 00:12:28.978 "name": "BaseBdev2", 00:12:28.978 "uuid": "30cdaf03-0b43-406e-a6e5-586f9f2e2945", 00:12:28.978 "is_configured": true, 00:12:28.978 "data_offset": 0, 00:12:28.978 "data_size": 65536 00:12:28.978 }, 00:12:28.978 { 00:12:28.978 "name": "BaseBdev3", 00:12:28.978 "uuid": 
"0f1927e5-6df9-465b-b801-33f229efd33d", 00:12:28.978 "is_configured": true, 00:12:28.978 "data_offset": 0, 00:12:28.978 "data_size": 65536 00:12:28.978 }, 00:12:28.978 { 00:12:28.978 "name": "BaseBdev4", 00:12:28.978 "uuid": "15ba96f4-bd52-4ef2-bb4b-c650df98b332", 00:12:28.978 "is_configured": true, 00:12:28.978 "data_offset": 0, 00:12:28.978 "data_size": 65536 00:12:28.978 } 00:12:28.978 ] 00:12:28.978 }' 00:12:28.978 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.978 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.546 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:29.546 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:29.546 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:29.546 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:29.546 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:29.546 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:29.546 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:29.546 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.546 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.546 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:29.546 [2024-11-20 08:46:00.253910] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:29.546 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.546 08:46:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:29.546 "name": "Existed_Raid", 00:12:29.546 "aliases": [ 00:12:29.546 "29f7fae2-2bc4-4ea3-9ec4-9519411228fc" 00:12:29.546 ], 00:12:29.546 "product_name": "Raid Volume", 00:12:29.546 "block_size": 512, 00:12:29.546 "num_blocks": 262144, 00:12:29.546 "uuid": "29f7fae2-2bc4-4ea3-9ec4-9519411228fc", 00:12:29.546 "assigned_rate_limits": { 00:12:29.546 "rw_ios_per_sec": 0, 00:12:29.546 "rw_mbytes_per_sec": 0, 00:12:29.546 "r_mbytes_per_sec": 0, 00:12:29.546 "w_mbytes_per_sec": 0 00:12:29.546 }, 00:12:29.546 "claimed": false, 00:12:29.546 "zoned": false, 00:12:29.546 "supported_io_types": { 00:12:29.546 "read": true, 00:12:29.546 "write": true, 00:12:29.546 "unmap": true, 00:12:29.546 "flush": true, 00:12:29.546 "reset": true, 00:12:29.546 "nvme_admin": false, 00:12:29.546 "nvme_io": false, 00:12:29.546 "nvme_io_md": false, 00:12:29.546 "write_zeroes": true, 00:12:29.546 "zcopy": false, 00:12:29.546 "get_zone_info": false, 00:12:29.546 "zone_management": false, 00:12:29.546 "zone_append": false, 00:12:29.546 "compare": false, 00:12:29.546 "compare_and_write": false, 00:12:29.546 "abort": false, 00:12:29.546 "seek_hole": false, 00:12:29.546 "seek_data": false, 00:12:29.546 "copy": false, 00:12:29.546 "nvme_iov_md": false 00:12:29.546 }, 00:12:29.546 "memory_domains": [ 00:12:29.546 { 00:12:29.546 "dma_device_id": "system", 00:12:29.546 "dma_device_type": 1 00:12:29.546 }, 00:12:29.546 { 00:12:29.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.546 "dma_device_type": 2 00:12:29.546 }, 00:12:29.546 { 00:12:29.546 "dma_device_id": "system", 00:12:29.546 "dma_device_type": 1 00:12:29.546 }, 00:12:29.546 { 00:12:29.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.546 "dma_device_type": 2 00:12:29.546 }, 00:12:29.546 { 00:12:29.546 "dma_device_id": "system", 00:12:29.546 "dma_device_type": 1 00:12:29.546 }, 00:12:29.546 { 00:12:29.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:29.546 "dma_device_type": 2 00:12:29.546 }, 00:12:29.546 { 00:12:29.546 "dma_device_id": "system", 00:12:29.546 "dma_device_type": 1 00:12:29.546 }, 00:12:29.546 { 00:12:29.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.546 "dma_device_type": 2 00:12:29.546 } 00:12:29.546 ], 00:12:29.546 "driver_specific": { 00:12:29.546 "raid": { 00:12:29.546 "uuid": "29f7fae2-2bc4-4ea3-9ec4-9519411228fc", 00:12:29.546 "strip_size_kb": 64, 00:12:29.546 "state": "online", 00:12:29.546 "raid_level": "raid0", 00:12:29.546 "superblock": false, 00:12:29.546 "num_base_bdevs": 4, 00:12:29.546 "num_base_bdevs_discovered": 4, 00:12:29.546 "num_base_bdevs_operational": 4, 00:12:29.546 "base_bdevs_list": [ 00:12:29.546 { 00:12:29.546 "name": "BaseBdev1", 00:12:29.546 "uuid": "522e941f-eee4-4078-8bf0-c5f83f3254d6", 00:12:29.546 "is_configured": true, 00:12:29.546 "data_offset": 0, 00:12:29.546 "data_size": 65536 00:12:29.546 }, 00:12:29.546 { 00:12:29.546 "name": "BaseBdev2", 00:12:29.546 "uuid": "30cdaf03-0b43-406e-a6e5-586f9f2e2945", 00:12:29.546 "is_configured": true, 00:12:29.546 "data_offset": 0, 00:12:29.546 "data_size": 65536 00:12:29.546 }, 00:12:29.546 { 00:12:29.546 "name": "BaseBdev3", 00:12:29.546 "uuid": "0f1927e5-6df9-465b-b801-33f229efd33d", 00:12:29.546 "is_configured": true, 00:12:29.546 "data_offset": 0, 00:12:29.547 "data_size": 65536 00:12:29.547 }, 00:12:29.547 { 00:12:29.547 "name": "BaseBdev4", 00:12:29.547 "uuid": "15ba96f4-bd52-4ef2-bb4b-c650df98b332", 00:12:29.547 "is_configured": true, 00:12:29.547 "data_offset": 0, 00:12:29.547 "data_size": 65536 00:12:29.547 } 00:12:29.547 ] 00:12:29.547 } 00:12:29.547 } 00:12:29.547 }' 00:12:29.547 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:29.547 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:29.547 BaseBdev2 00:12:29.547 BaseBdev3 
00:12:29.547 BaseBdev4' 00:12:29.547 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.547 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:29.547 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.547 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:29.547 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.547 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.547 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.547 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.547 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.547 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.547 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.547 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:29.547 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.547 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.547 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.806 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.806 08:46:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.806 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.806 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.806 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:29.806 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.806 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.806 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.806 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.806 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.806 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.806 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.806 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:29.806 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.806 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.806 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.806 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.806 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.806 08:46:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.806 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:29.806 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.806 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.806 [2024-11-20 08:46:00.633675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:29.807 [2024-11-20 08:46:00.633717] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.807 [2024-11-20 08:46:00.633786] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.067 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.067 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:30.067 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:30.067 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:30.067 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:30.067 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:30.067 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:30.067 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.067 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:30.067 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:30.067 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:30.067 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:30.067 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.067 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.067 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.067 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.067 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.067 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.067 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.067 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.067 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.067 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.067 "name": "Existed_Raid", 00:12:30.067 "uuid": "29f7fae2-2bc4-4ea3-9ec4-9519411228fc", 00:12:30.067 "strip_size_kb": 64, 00:12:30.067 "state": "offline", 00:12:30.067 "raid_level": "raid0", 00:12:30.067 "superblock": false, 00:12:30.067 "num_base_bdevs": 4, 00:12:30.067 "num_base_bdevs_discovered": 3, 00:12:30.067 "num_base_bdevs_operational": 3, 00:12:30.067 "base_bdevs_list": [ 00:12:30.067 { 00:12:30.067 "name": null, 00:12:30.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.067 "is_configured": false, 00:12:30.067 "data_offset": 0, 00:12:30.067 "data_size": 65536 00:12:30.067 }, 00:12:30.067 { 00:12:30.067 "name": "BaseBdev2", 00:12:30.067 "uuid": "30cdaf03-0b43-406e-a6e5-586f9f2e2945", 00:12:30.067 "is_configured": 
true, 00:12:30.067 "data_offset": 0, 00:12:30.067 "data_size": 65536 00:12:30.067 }, 00:12:30.067 { 00:12:30.067 "name": "BaseBdev3", 00:12:30.067 "uuid": "0f1927e5-6df9-465b-b801-33f229efd33d", 00:12:30.067 "is_configured": true, 00:12:30.067 "data_offset": 0, 00:12:30.067 "data_size": 65536 00:12:30.067 }, 00:12:30.067 { 00:12:30.067 "name": "BaseBdev4", 00:12:30.067 "uuid": "15ba96f4-bd52-4ef2-bb4b-c650df98b332", 00:12:30.067 "is_configured": true, 00:12:30.067 "data_offset": 0, 00:12:30.067 "data_size": 65536 00:12:30.067 } 00:12:30.067 ] 00:12:30.067 }' 00:12:30.067 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.067 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.635 [2024-11-20 08:46:01.298573] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.635 [2024-11-20 08:46:01.449618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:30.635 08:46:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.635 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.894 [2024-11-20 08:46:01.601959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:30.894 [2024-11-20 08:46:01.602021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.894 BaseBdev2 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.894 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.894 [ 00:12:30.894 { 00:12:30.894 "name": "BaseBdev2", 00:12:30.894 "aliases": [ 00:12:30.894 "3caad55a-38a3-48b2-b110-2fce7f249847" 00:12:30.894 ], 00:12:30.894 "product_name": "Malloc disk", 00:12:30.894 "block_size": 512, 00:12:30.894 "num_blocks": 65536, 00:12:30.894 "uuid": "3caad55a-38a3-48b2-b110-2fce7f249847", 00:12:30.894 "assigned_rate_limits": { 00:12:30.894 "rw_ios_per_sec": 0, 00:12:30.894 "rw_mbytes_per_sec": 0, 00:12:30.894 "r_mbytes_per_sec": 0, 00:12:30.894 "w_mbytes_per_sec": 0 00:12:30.894 }, 00:12:30.894 "claimed": false, 00:12:30.894 "zoned": false, 00:12:30.894 "supported_io_types": { 00:12:30.894 "read": true, 00:12:30.894 "write": true, 00:12:30.894 "unmap": true, 00:12:30.894 "flush": true, 00:12:30.894 "reset": true, 00:12:30.894 "nvme_admin": false, 00:12:30.894 "nvme_io": false, 00:12:30.894 "nvme_io_md": false, 00:12:30.894 "write_zeroes": true, 00:12:30.894 "zcopy": true, 00:12:30.894 "get_zone_info": false, 00:12:31.153 "zone_management": false, 00:12:31.153 "zone_append": false, 00:12:31.153 "compare": false, 00:12:31.153 "compare_and_write": false, 00:12:31.153 "abort": true, 00:12:31.153 "seek_hole": false, 00:12:31.153 
"seek_data": false, 00:12:31.153 "copy": true, 00:12:31.153 "nvme_iov_md": false 00:12:31.153 }, 00:12:31.153 "memory_domains": [ 00:12:31.153 { 00:12:31.153 "dma_device_id": "system", 00:12:31.153 "dma_device_type": 1 00:12:31.153 }, 00:12:31.153 { 00:12:31.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.153 "dma_device_type": 2 00:12:31.153 } 00:12:31.153 ], 00:12:31.153 "driver_specific": {} 00:12:31.153 } 00:12:31.153 ] 00:12:31.153 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.153 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:31.153 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:31.153 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:31.153 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:31.153 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.153 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.153 BaseBdev3 00:12:31.153 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.153 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:31.153 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:31.153 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:31.153 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:31.153 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:31.153 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:12:31.153 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:31.153 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.153 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.153 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.153 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:31.153 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.153 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.153 [ 00:12:31.153 { 00:12:31.153 "name": "BaseBdev3", 00:12:31.153 "aliases": [ 00:12:31.153 "a0fd8c5b-b068-4c7f-9f8e-6d2d9f60fa60" 00:12:31.153 ], 00:12:31.153 "product_name": "Malloc disk", 00:12:31.153 "block_size": 512, 00:12:31.153 "num_blocks": 65536, 00:12:31.153 "uuid": "a0fd8c5b-b068-4c7f-9f8e-6d2d9f60fa60", 00:12:31.153 "assigned_rate_limits": { 00:12:31.153 "rw_ios_per_sec": 0, 00:12:31.153 "rw_mbytes_per_sec": 0, 00:12:31.153 "r_mbytes_per_sec": 0, 00:12:31.153 "w_mbytes_per_sec": 0 00:12:31.153 }, 00:12:31.153 "claimed": false, 00:12:31.153 "zoned": false, 00:12:31.153 "supported_io_types": { 00:12:31.153 "read": true, 00:12:31.153 "write": true, 00:12:31.153 "unmap": true, 00:12:31.153 "flush": true, 00:12:31.153 "reset": true, 00:12:31.153 "nvme_admin": false, 00:12:31.153 "nvme_io": false, 00:12:31.153 "nvme_io_md": false, 00:12:31.153 "write_zeroes": true, 00:12:31.153 "zcopy": true, 00:12:31.153 "get_zone_info": false, 00:12:31.153 "zone_management": false, 00:12:31.153 "zone_append": false, 00:12:31.153 "compare": false, 00:12:31.153 "compare_and_write": false, 00:12:31.153 "abort": true, 00:12:31.153 "seek_hole": false, 00:12:31.153 "seek_data": false, 
00:12:31.153 "copy": true, 00:12:31.153 "nvme_iov_md": false 00:12:31.153 }, 00:12:31.153 "memory_domains": [ 00:12:31.153 { 00:12:31.153 "dma_device_id": "system", 00:12:31.153 "dma_device_type": 1 00:12:31.153 }, 00:12:31.153 { 00:12:31.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.153 "dma_device_type": 2 00:12:31.153 } 00:12:31.153 ], 00:12:31.153 "driver_specific": {} 00:12:31.153 } 00:12:31.153 ] 00:12:31.153 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.154 BaseBdev4 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:31.154 
08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.154 [ 00:12:31.154 { 00:12:31.154 "name": "BaseBdev4", 00:12:31.154 "aliases": [ 00:12:31.154 "699e4225-0431-4ad3-8397-efb4de9639eb" 00:12:31.154 ], 00:12:31.154 "product_name": "Malloc disk", 00:12:31.154 "block_size": 512, 00:12:31.154 "num_blocks": 65536, 00:12:31.154 "uuid": "699e4225-0431-4ad3-8397-efb4de9639eb", 00:12:31.154 "assigned_rate_limits": { 00:12:31.154 "rw_ios_per_sec": 0, 00:12:31.154 "rw_mbytes_per_sec": 0, 00:12:31.154 "r_mbytes_per_sec": 0, 00:12:31.154 "w_mbytes_per_sec": 0 00:12:31.154 }, 00:12:31.154 "claimed": false, 00:12:31.154 "zoned": false, 00:12:31.154 "supported_io_types": { 00:12:31.154 "read": true, 00:12:31.154 "write": true, 00:12:31.154 "unmap": true, 00:12:31.154 "flush": true, 00:12:31.154 "reset": true, 00:12:31.154 "nvme_admin": false, 00:12:31.154 "nvme_io": false, 00:12:31.154 "nvme_io_md": false, 00:12:31.154 "write_zeroes": true, 00:12:31.154 "zcopy": true, 00:12:31.154 "get_zone_info": false, 00:12:31.154 "zone_management": false, 00:12:31.154 "zone_append": false, 00:12:31.154 "compare": false, 00:12:31.154 "compare_and_write": false, 00:12:31.154 "abort": true, 00:12:31.154 "seek_hole": false, 00:12:31.154 "seek_data": false, 00:12:31.154 
"copy": true, 00:12:31.154 "nvme_iov_md": false 00:12:31.154 }, 00:12:31.154 "memory_domains": [ 00:12:31.154 { 00:12:31.154 "dma_device_id": "system", 00:12:31.154 "dma_device_type": 1 00:12:31.154 }, 00:12:31.154 { 00:12:31.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.154 "dma_device_type": 2 00:12:31.154 } 00:12:31.154 ], 00:12:31.154 "driver_specific": {} 00:12:31.154 } 00:12:31.154 ] 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.154 [2024-11-20 08:46:01.972235] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:31.154 [2024-11-20 08:46:01.972434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:31.154 [2024-11-20 08:46:01.972580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:31.154 [2024-11-20 08:46:01.975263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:31.154 [2024-11-20 08:46:01.975469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.154 08:46:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.154 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.154 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.154 "name": "Existed_Raid", 00:12:31.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.154 "strip_size_kb": 64, 00:12:31.154 "state": "configuring", 00:12:31.154 
"raid_level": "raid0", 00:12:31.154 "superblock": false, 00:12:31.154 "num_base_bdevs": 4, 00:12:31.154 "num_base_bdevs_discovered": 3, 00:12:31.154 "num_base_bdevs_operational": 4, 00:12:31.154 "base_bdevs_list": [ 00:12:31.154 { 00:12:31.154 "name": "BaseBdev1", 00:12:31.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.154 "is_configured": false, 00:12:31.154 "data_offset": 0, 00:12:31.154 "data_size": 0 00:12:31.154 }, 00:12:31.154 { 00:12:31.154 "name": "BaseBdev2", 00:12:31.154 "uuid": "3caad55a-38a3-48b2-b110-2fce7f249847", 00:12:31.154 "is_configured": true, 00:12:31.154 "data_offset": 0, 00:12:31.154 "data_size": 65536 00:12:31.154 }, 00:12:31.154 { 00:12:31.154 "name": "BaseBdev3", 00:12:31.154 "uuid": "a0fd8c5b-b068-4c7f-9f8e-6d2d9f60fa60", 00:12:31.154 "is_configured": true, 00:12:31.154 "data_offset": 0, 00:12:31.154 "data_size": 65536 00:12:31.154 }, 00:12:31.154 { 00:12:31.154 "name": "BaseBdev4", 00:12:31.154 "uuid": "699e4225-0431-4ad3-8397-efb4de9639eb", 00:12:31.154 "is_configured": true, 00:12:31.154 "data_offset": 0, 00:12:31.154 "data_size": 65536 00:12:31.154 } 00:12:31.154 ] 00:12:31.154 }' 00:12:31.154 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.154 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.722 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:31.722 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.722 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.722 [2024-11-20 08:46:02.508452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:31.723 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.723 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:31.723 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.723 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.723 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:31.723 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.723 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.723 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.723 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.723 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.723 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.723 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.723 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.723 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.723 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.723 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.723 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.723 "name": "Existed_Raid", 00:12:31.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.723 "strip_size_kb": 64, 00:12:31.723 "state": "configuring", 00:12:31.723 "raid_level": "raid0", 00:12:31.723 "superblock": false, 00:12:31.723 
"num_base_bdevs": 4, 00:12:31.723 "num_base_bdevs_discovered": 2, 00:12:31.723 "num_base_bdevs_operational": 4, 00:12:31.723 "base_bdevs_list": [ 00:12:31.723 { 00:12:31.723 "name": "BaseBdev1", 00:12:31.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.723 "is_configured": false, 00:12:31.723 "data_offset": 0, 00:12:31.723 "data_size": 0 00:12:31.723 }, 00:12:31.723 { 00:12:31.723 "name": null, 00:12:31.723 "uuid": "3caad55a-38a3-48b2-b110-2fce7f249847", 00:12:31.723 "is_configured": false, 00:12:31.723 "data_offset": 0, 00:12:31.723 "data_size": 65536 00:12:31.723 }, 00:12:31.723 { 00:12:31.723 "name": "BaseBdev3", 00:12:31.723 "uuid": "a0fd8c5b-b068-4c7f-9f8e-6d2d9f60fa60", 00:12:31.723 "is_configured": true, 00:12:31.723 "data_offset": 0, 00:12:31.723 "data_size": 65536 00:12:31.723 }, 00:12:31.723 { 00:12:31.723 "name": "BaseBdev4", 00:12:31.723 "uuid": "699e4225-0431-4ad3-8397-efb4de9639eb", 00:12:31.723 "is_configured": true, 00:12:31.723 "data_offset": 0, 00:12:31.723 "data_size": 65536 00:12:31.723 } 00:12:31.723 ] 00:12:31.723 }' 00:12:31.723 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.723 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:32.291 08:46:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.291 [2024-11-20 08:46:03.140237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.291 BaseBdev1 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:32.291 [ 00:12:32.291 { 00:12:32.291 "name": "BaseBdev1", 00:12:32.291 "aliases": [ 00:12:32.291 "3b21237b-2354-4602-8a4a-3d4532f044c8" 00:12:32.291 ], 00:12:32.291 "product_name": "Malloc disk", 00:12:32.291 "block_size": 512, 00:12:32.291 "num_blocks": 65536, 00:12:32.291 "uuid": "3b21237b-2354-4602-8a4a-3d4532f044c8", 00:12:32.291 "assigned_rate_limits": { 00:12:32.291 "rw_ios_per_sec": 0, 00:12:32.291 "rw_mbytes_per_sec": 0, 00:12:32.291 "r_mbytes_per_sec": 0, 00:12:32.291 "w_mbytes_per_sec": 0 00:12:32.291 }, 00:12:32.291 "claimed": true, 00:12:32.291 "claim_type": "exclusive_write", 00:12:32.291 "zoned": false, 00:12:32.291 "supported_io_types": { 00:12:32.291 "read": true, 00:12:32.291 "write": true, 00:12:32.291 "unmap": true, 00:12:32.291 "flush": true, 00:12:32.291 "reset": true, 00:12:32.291 "nvme_admin": false, 00:12:32.291 "nvme_io": false, 00:12:32.291 "nvme_io_md": false, 00:12:32.291 "write_zeroes": true, 00:12:32.291 "zcopy": true, 00:12:32.291 "get_zone_info": false, 00:12:32.291 "zone_management": false, 00:12:32.291 "zone_append": false, 00:12:32.291 "compare": false, 00:12:32.291 "compare_and_write": false, 00:12:32.291 "abort": true, 00:12:32.291 "seek_hole": false, 00:12:32.291 "seek_data": false, 00:12:32.291 "copy": true, 00:12:32.291 "nvme_iov_md": false 00:12:32.291 }, 00:12:32.291 "memory_domains": [ 00:12:32.291 { 00:12:32.291 "dma_device_id": "system", 00:12:32.291 "dma_device_type": 1 00:12:32.291 }, 00:12:32.291 { 00:12:32.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.291 "dma_device_type": 2 00:12:32.291 } 00:12:32.291 ], 00:12:32.291 "driver_specific": {} 00:12:32.291 } 00:12:32.291 ] 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.291 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.292 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.292 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.550 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.550 "name": "Existed_Raid", 00:12:32.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.550 "strip_size_kb": 64, 00:12:32.550 "state": "configuring", 00:12:32.550 "raid_level": "raid0", 00:12:32.550 "superblock": false, 
00:12:32.550 "num_base_bdevs": 4, 00:12:32.550 "num_base_bdevs_discovered": 3, 00:12:32.550 "num_base_bdevs_operational": 4, 00:12:32.550 "base_bdevs_list": [ 00:12:32.550 { 00:12:32.550 "name": "BaseBdev1", 00:12:32.550 "uuid": "3b21237b-2354-4602-8a4a-3d4532f044c8", 00:12:32.550 "is_configured": true, 00:12:32.550 "data_offset": 0, 00:12:32.550 "data_size": 65536 00:12:32.550 }, 00:12:32.550 { 00:12:32.550 "name": null, 00:12:32.551 "uuid": "3caad55a-38a3-48b2-b110-2fce7f249847", 00:12:32.551 "is_configured": false, 00:12:32.551 "data_offset": 0, 00:12:32.551 "data_size": 65536 00:12:32.551 }, 00:12:32.551 { 00:12:32.551 "name": "BaseBdev3", 00:12:32.551 "uuid": "a0fd8c5b-b068-4c7f-9f8e-6d2d9f60fa60", 00:12:32.551 "is_configured": true, 00:12:32.551 "data_offset": 0, 00:12:32.551 "data_size": 65536 00:12:32.551 }, 00:12:32.551 { 00:12:32.551 "name": "BaseBdev4", 00:12:32.551 "uuid": "699e4225-0431-4ad3-8397-efb4de9639eb", 00:12:32.551 "is_configured": true, 00:12:32.551 "data_offset": 0, 00:12:32.551 "data_size": 65536 00:12:32.551 } 00:12:32.551 ] 00:12:32.551 }' 00:12:32.551 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.551 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:33.118 08:46:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.118 [2024-11-20 08:46:03.800551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.118 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.118 "name": "Existed_Raid", 00:12:33.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.118 "strip_size_kb": 64, 00:12:33.118 "state": "configuring", 00:12:33.118 "raid_level": "raid0", 00:12:33.118 "superblock": false, 00:12:33.118 "num_base_bdevs": 4, 00:12:33.118 "num_base_bdevs_discovered": 2, 00:12:33.118 "num_base_bdevs_operational": 4, 00:12:33.118 "base_bdevs_list": [ 00:12:33.118 { 00:12:33.118 "name": "BaseBdev1", 00:12:33.118 "uuid": "3b21237b-2354-4602-8a4a-3d4532f044c8", 00:12:33.118 "is_configured": true, 00:12:33.118 "data_offset": 0, 00:12:33.118 "data_size": 65536 00:12:33.118 }, 00:12:33.118 { 00:12:33.118 "name": null, 00:12:33.118 "uuid": "3caad55a-38a3-48b2-b110-2fce7f249847", 00:12:33.118 "is_configured": false, 00:12:33.118 "data_offset": 0, 00:12:33.118 "data_size": 65536 00:12:33.118 }, 00:12:33.118 { 00:12:33.118 "name": null, 00:12:33.118 "uuid": "a0fd8c5b-b068-4c7f-9f8e-6d2d9f60fa60", 00:12:33.118 "is_configured": false, 00:12:33.118 "data_offset": 0, 00:12:33.118 "data_size": 65536 00:12:33.118 }, 00:12:33.118 { 00:12:33.119 "name": "BaseBdev4", 00:12:33.119 "uuid": "699e4225-0431-4ad3-8397-efb4de9639eb", 00:12:33.119 "is_configured": true, 00:12:33.119 "data_offset": 0, 00:12:33.119 "data_size": 65536 00:12:33.119 } 00:12:33.119 ] 00:12:33.119 }' 00:12:33.119 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.119 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.684 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:12:33.684 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.684 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.684 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.684 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.684 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:33.684 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:33.684 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.684 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.685 [2024-11-20 08:46:04.368751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:33.685 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.685 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:33.685 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.685 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.685 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:33.685 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.685 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.685 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:33.685 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.685 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.685 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.685 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.685 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.685 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.685 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.685 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.685 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.685 "name": "Existed_Raid", 00:12:33.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.685 "strip_size_kb": 64, 00:12:33.685 "state": "configuring", 00:12:33.685 "raid_level": "raid0", 00:12:33.685 "superblock": false, 00:12:33.685 "num_base_bdevs": 4, 00:12:33.685 "num_base_bdevs_discovered": 3, 00:12:33.685 "num_base_bdevs_operational": 4, 00:12:33.685 "base_bdevs_list": [ 00:12:33.685 { 00:12:33.685 "name": "BaseBdev1", 00:12:33.685 "uuid": "3b21237b-2354-4602-8a4a-3d4532f044c8", 00:12:33.685 "is_configured": true, 00:12:33.685 "data_offset": 0, 00:12:33.685 "data_size": 65536 00:12:33.685 }, 00:12:33.685 { 00:12:33.685 "name": null, 00:12:33.685 "uuid": "3caad55a-38a3-48b2-b110-2fce7f249847", 00:12:33.685 "is_configured": false, 00:12:33.685 "data_offset": 0, 00:12:33.685 "data_size": 65536 00:12:33.685 }, 00:12:33.685 { 00:12:33.685 "name": "BaseBdev3", 00:12:33.685 "uuid": "a0fd8c5b-b068-4c7f-9f8e-6d2d9f60fa60", 00:12:33.685 "is_configured": 
true, 00:12:33.685 "data_offset": 0, 00:12:33.685 "data_size": 65536 00:12:33.685 }, 00:12:33.685 { 00:12:33.685 "name": "BaseBdev4", 00:12:33.685 "uuid": "699e4225-0431-4ad3-8397-efb4de9639eb", 00:12:33.685 "is_configured": true, 00:12:33.685 "data_offset": 0, 00:12:33.685 "data_size": 65536 00:12:33.685 } 00:12:33.685 ] 00:12:33.685 }' 00:12:33.685 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.685 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.251 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.251 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.251 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.251 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:34.251 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.251 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:34.251 08:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:34.251 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.251 08:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.251 [2024-11-20 08:46:04.924994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:34.251 08:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.251 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:34.251 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:12:34.251 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.251 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:34.251 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.251 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.251 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.251 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.251 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.251 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.251 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.251 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.251 08:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.251 08:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.252 08:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.252 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.252 "name": "Existed_Raid", 00:12:34.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.252 "strip_size_kb": 64, 00:12:34.252 "state": "configuring", 00:12:34.252 "raid_level": "raid0", 00:12:34.252 "superblock": false, 00:12:34.252 "num_base_bdevs": 4, 00:12:34.252 "num_base_bdevs_discovered": 2, 00:12:34.252 "num_base_bdevs_operational": 4, 00:12:34.252 
"base_bdevs_list": [ 00:12:34.252 { 00:12:34.252 "name": null, 00:12:34.252 "uuid": "3b21237b-2354-4602-8a4a-3d4532f044c8", 00:12:34.252 "is_configured": false, 00:12:34.252 "data_offset": 0, 00:12:34.252 "data_size": 65536 00:12:34.252 }, 00:12:34.252 { 00:12:34.252 "name": null, 00:12:34.252 "uuid": "3caad55a-38a3-48b2-b110-2fce7f249847", 00:12:34.252 "is_configured": false, 00:12:34.252 "data_offset": 0, 00:12:34.252 "data_size": 65536 00:12:34.252 }, 00:12:34.252 { 00:12:34.252 "name": "BaseBdev3", 00:12:34.252 "uuid": "a0fd8c5b-b068-4c7f-9f8e-6d2d9f60fa60", 00:12:34.252 "is_configured": true, 00:12:34.252 "data_offset": 0, 00:12:34.252 "data_size": 65536 00:12:34.252 }, 00:12:34.252 { 00:12:34.252 "name": "BaseBdev4", 00:12:34.252 "uuid": "699e4225-0431-4ad3-8397-efb4de9639eb", 00:12:34.252 "is_configured": true, 00:12:34.252 "data_offset": 0, 00:12:34.252 "data_size": 65536 00:12:34.252 } 00:12:34.252 ] 00:12:34.252 }' 00:12:34.252 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.252 08:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:34.817 08:46:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.817 [2024-11-20 08:46:05.579935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.817 "name": "Existed_Raid", 00:12:34.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.817 "strip_size_kb": 64, 00:12:34.817 "state": "configuring", 00:12:34.817 "raid_level": "raid0", 00:12:34.817 "superblock": false, 00:12:34.817 "num_base_bdevs": 4, 00:12:34.817 "num_base_bdevs_discovered": 3, 00:12:34.817 "num_base_bdevs_operational": 4, 00:12:34.817 "base_bdevs_list": [ 00:12:34.817 { 00:12:34.817 "name": null, 00:12:34.817 "uuid": "3b21237b-2354-4602-8a4a-3d4532f044c8", 00:12:34.817 "is_configured": false, 00:12:34.817 "data_offset": 0, 00:12:34.817 "data_size": 65536 00:12:34.817 }, 00:12:34.817 { 00:12:34.817 "name": "BaseBdev2", 00:12:34.817 "uuid": "3caad55a-38a3-48b2-b110-2fce7f249847", 00:12:34.817 "is_configured": true, 00:12:34.817 "data_offset": 0, 00:12:34.817 "data_size": 65536 00:12:34.817 }, 00:12:34.817 { 00:12:34.817 "name": "BaseBdev3", 00:12:34.817 "uuid": "a0fd8c5b-b068-4c7f-9f8e-6d2d9f60fa60", 00:12:34.817 "is_configured": true, 00:12:34.817 "data_offset": 0, 00:12:34.817 "data_size": 65536 00:12:34.817 }, 00:12:34.817 { 00:12:34.817 "name": "BaseBdev4", 00:12:34.817 "uuid": "699e4225-0431-4ad3-8397-efb4de9639eb", 00:12:34.817 "is_configured": true, 00:12:34.817 "data_offset": 0, 00:12:34.817 "data_size": 65536 00:12:34.817 } 00:12:34.817 ] 00:12:34.817 }' 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.817 08:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.382 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.382 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:12:35.382 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.382 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.382 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.382 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:35.382 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.382 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:35.382 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.382 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.382 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.382 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3b21237b-2354-4602-8a4a-3d4532f044c8 00:12:35.382 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.382 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.382 [2024-11-20 08:46:06.251094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:35.382 [2024-11-20 08:46:06.251177] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:35.382 [2024-11-20 08:46:06.251192] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:35.382 [2024-11-20 08:46:06.251531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:35.382 [2024-11-20 08:46:06.251755] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:35.382 [2024-11-20 08:46:06.251778] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:35.382 [2024-11-20 08:46:06.252071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.382 NewBaseBdev 00:12:35.382 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.382 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:35.382 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:35.382 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:35.382 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:35.383 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:35.383 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:35.383 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:35.383 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.383 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.383 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.383 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:35.383 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.383 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.383 [ 00:12:35.383 { 
00:12:35.383 "name": "NewBaseBdev", 00:12:35.383 "aliases": [ 00:12:35.383 "3b21237b-2354-4602-8a4a-3d4532f044c8" 00:12:35.383 ], 00:12:35.383 "product_name": "Malloc disk", 00:12:35.383 "block_size": 512, 00:12:35.383 "num_blocks": 65536, 00:12:35.383 "uuid": "3b21237b-2354-4602-8a4a-3d4532f044c8", 00:12:35.383 "assigned_rate_limits": { 00:12:35.383 "rw_ios_per_sec": 0, 00:12:35.383 "rw_mbytes_per_sec": 0, 00:12:35.383 "r_mbytes_per_sec": 0, 00:12:35.383 "w_mbytes_per_sec": 0 00:12:35.383 }, 00:12:35.383 "claimed": true, 00:12:35.383 "claim_type": "exclusive_write", 00:12:35.383 "zoned": false, 00:12:35.383 "supported_io_types": { 00:12:35.383 "read": true, 00:12:35.383 "write": true, 00:12:35.383 "unmap": true, 00:12:35.383 "flush": true, 00:12:35.383 "reset": true, 00:12:35.383 "nvme_admin": false, 00:12:35.383 "nvme_io": false, 00:12:35.383 "nvme_io_md": false, 00:12:35.383 "write_zeroes": true, 00:12:35.383 "zcopy": true, 00:12:35.383 "get_zone_info": false, 00:12:35.383 "zone_management": false, 00:12:35.383 "zone_append": false, 00:12:35.383 "compare": false, 00:12:35.383 "compare_and_write": false, 00:12:35.383 "abort": true, 00:12:35.383 "seek_hole": false, 00:12:35.383 "seek_data": false, 00:12:35.383 "copy": true, 00:12:35.383 "nvme_iov_md": false 00:12:35.383 }, 00:12:35.383 "memory_domains": [ 00:12:35.383 { 00:12:35.383 "dma_device_id": "system", 00:12:35.383 "dma_device_type": 1 00:12:35.383 }, 00:12:35.383 { 00:12:35.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.383 "dma_device_type": 2 00:12:35.383 } 00:12:35.383 ], 00:12:35.383 "driver_specific": {} 00:12:35.383 } 00:12:35.383 ] 00:12:35.383 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.383 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:35.383 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:35.383 
08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.383 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.383 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:35.383 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.383 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.383 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.383 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.383 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.383 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.383 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.383 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.383 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.383 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.641 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.641 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.641 "name": "Existed_Raid", 00:12:35.641 "uuid": "6cd97cda-e697-4028-a9db-eb213310e08b", 00:12:35.641 "strip_size_kb": 64, 00:12:35.641 "state": "online", 00:12:35.641 "raid_level": "raid0", 00:12:35.641 "superblock": false, 00:12:35.641 "num_base_bdevs": 4, 00:12:35.641 "num_base_bdevs_discovered": 4, 00:12:35.641 
"num_base_bdevs_operational": 4, 00:12:35.641 "base_bdevs_list": [ 00:12:35.641 { 00:12:35.641 "name": "NewBaseBdev", 00:12:35.641 "uuid": "3b21237b-2354-4602-8a4a-3d4532f044c8", 00:12:35.641 "is_configured": true, 00:12:35.641 "data_offset": 0, 00:12:35.641 "data_size": 65536 00:12:35.641 }, 00:12:35.641 { 00:12:35.641 "name": "BaseBdev2", 00:12:35.641 "uuid": "3caad55a-38a3-48b2-b110-2fce7f249847", 00:12:35.641 "is_configured": true, 00:12:35.641 "data_offset": 0, 00:12:35.641 "data_size": 65536 00:12:35.641 }, 00:12:35.641 { 00:12:35.641 "name": "BaseBdev3", 00:12:35.641 "uuid": "a0fd8c5b-b068-4c7f-9f8e-6d2d9f60fa60", 00:12:35.641 "is_configured": true, 00:12:35.641 "data_offset": 0, 00:12:35.641 "data_size": 65536 00:12:35.642 }, 00:12:35.642 { 00:12:35.642 "name": "BaseBdev4", 00:12:35.642 "uuid": "699e4225-0431-4ad3-8397-efb4de9639eb", 00:12:35.642 "is_configured": true, 00:12:35.642 "data_offset": 0, 00:12:35.642 "data_size": 65536 00:12:35.642 } 00:12:35.642 ] 00:12:35.642 }' 00:12:35.642 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.642 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.208 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:36.208 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:36.208 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:36.208 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:36.208 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:36.208 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:36.208 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:12:36.208 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:36.208 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.208 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.208 [2024-11-20 08:46:06.823833] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:36.208 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.208 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:36.208 "name": "Existed_Raid", 00:12:36.208 "aliases": [ 00:12:36.208 "6cd97cda-e697-4028-a9db-eb213310e08b" 00:12:36.208 ], 00:12:36.208 "product_name": "Raid Volume", 00:12:36.208 "block_size": 512, 00:12:36.208 "num_blocks": 262144, 00:12:36.208 "uuid": "6cd97cda-e697-4028-a9db-eb213310e08b", 00:12:36.208 "assigned_rate_limits": { 00:12:36.208 "rw_ios_per_sec": 0, 00:12:36.208 "rw_mbytes_per_sec": 0, 00:12:36.208 "r_mbytes_per_sec": 0, 00:12:36.208 "w_mbytes_per_sec": 0 00:12:36.208 }, 00:12:36.208 "claimed": false, 00:12:36.208 "zoned": false, 00:12:36.208 "supported_io_types": { 00:12:36.208 "read": true, 00:12:36.208 "write": true, 00:12:36.208 "unmap": true, 00:12:36.208 "flush": true, 00:12:36.208 "reset": true, 00:12:36.208 "nvme_admin": false, 00:12:36.208 "nvme_io": false, 00:12:36.208 "nvme_io_md": false, 00:12:36.208 "write_zeroes": true, 00:12:36.208 "zcopy": false, 00:12:36.208 "get_zone_info": false, 00:12:36.208 "zone_management": false, 00:12:36.208 "zone_append": false, 00:12:36.208 "compare": false, 00:12:36.208 "compare_and_write": false, 00:12:36.208 "abort": false, 00:12:36.208 "seek_hole": false, 00:12:36.208 "seek_data": false, 00:12:36.208 "copy": false, 00:12:36.208 "nvme_iov_md": false 00:12:36.208 }, 00:12:36.208 "memory_domains": [ 00:12:36.208 { 00:12:36.208 "dma_device_id": "system", 
00:12:36.208 "dma_device_type": 1 00:12:36.208 }, 00:12:36.208 { 00:12:36.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.208 "dma_device_type": 2 00:12:36.208 }, 00:12:36.208 { 00:12:36.208 "dma_device_id": "system", 00:12:36.208 "dma_device_type": 1 00:12:36.208 }, 00:12:36.208 { 00:12:36.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.208 "dma_device_type": 2 00:12:36.208 }, 00:12:36.208 { 00:12:36.208 "dma_device_id": "system", 00:12:36.208 "dma_device_type": 1 00:12:36.208 }, 00:12:36.208 { 00:12:36.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.208 "dma_device_type": 2 00:12:36.208 }, 00:12:36.208 { 00:12:36.208 "dma_device_id": "system", 00:12:36.208 "dma_device_type": 1 00:12:36.208 }, 00:12:36.208 { 00:12:36.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.208 "dma_device_type": 2 00:12:36.208 } 00:12:36.208 ], 00:12:36.208 "driver_specific": { 00:12:36.208 "raid": { 00:12:36.208 "uuid": "6cd97cda-e697-4028-a9db-eb213310e08b", 00:12:36.208 "strip_size_kb": 64, 00:12:36.208 "state": "online", 00:12:36.208 "raid_level": "raid0", 00:12:36.208 "superblock": false, 00:12:36.208 "num_base_bdevs": 4, 00:12:36.208 "num_base_bdevs_discovered": 4, 00:12:36.208 "num_base_bdevs_operational": 4, 00:12:36.208 "base_bdevs_list": [ 00:12:36.208 { 00:12:36.208 "name": "NewBaseBdev", 00:12:36.208 "uuid": "3b21237b-2354-4602-8a4a-3d4532f044c8", 00:12:36.208 "is_configured": true, 00:12:36.208 "data_offset": 0, 00:12:36.208 "data_size": 65536 00:12:36.208 }, 00:12:36.208 { 00:12:36.208 "name": "BaseBdev2", 00:12:36.208 "uuid": "3caad55a-38a3-48b2-b110-2fce7f249847", 00:12:36.208 "is_configured": true, 00:12:36.208 "data_offset": 0, 00:12:36.208 "data_size": 65536 00:12:36.208 }, 00:12:36.208 { 00:12:36.208 "name": "BaseBdev3", 00:12:36.208 "uuid": "a0fd8c5b-b068-4c7f-9f8e-6d2d9f60fa60", 00:12:36.208 "is_configured": true, 00:12:36.208 "data_offset": 0, 00:12:36.208 "data_size": 65536 00:12:36.208 }, 00:12:36.208 { 00:12:36.208 "name": "BaseBdev4", 
00:12:36.208 "uuid": "699e4225-0431-4ad3-8397-efb4de9639eb", 00:12:36.209 "is_configured": true, 00:12:36.209 "data_offset": 0, 00:12:36.209 "data_size": 65536 00:12:36.209 } 00:12:36.209 ] 00:12:36.209 } 00:12:36.209 } 00:12:36.209 }' 00:12:36.209 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:36.209 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:36.209 BaseBdev2 00:12:36.209 BaseBdev3 00:12:36.209 BaseBdev4' 00:12:36.209 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.209 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:36.209 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:36.209 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:36.209 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.209 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.209 08:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.209 08:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.209 08:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.209 08:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.209 08:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:36.209 08:46:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.209 08:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:36.209 08:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.209 08:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.209 08:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.209 08:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.209 08:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.209 08:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:36.209 08:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:36.209 08:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.209 08:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.209 08:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.209 08:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.209 08:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.209 08:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.209 08:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:36.209 08:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:36.209 08:46:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.209 08:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.209 08:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.467 08:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.467 08:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.467 08:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.467 08:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:36.467 08:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.467 08:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.467 [2024-11-20 08:46:07.179470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:36.467 [2024-11-20 08:46:07.179513] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:36.467 [2024-11-20 08:46:07.179637] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:36.467 [2024-11-20 08:46:07.179742] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:36.467 [2024-11-20 08:46:07.179759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:36.467 08:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.467 08:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69441 00:12:36.467 08:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 69441 ']' 00:12:36.467 08:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69441 00:12:36.467 08:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:36.467 08:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:36.467 08:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69441 00:12:36.467 killing process with pid 69441 00:12:36.467 08:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:36.467 08:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:36.467 08:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69441' 00:12:36.467 08:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69441 00:12:36.467 [2024-11-20 08:46:07.221572] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:36.467 08:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69441 00:12:36.725 [2024-11-20 08:46:07.587226] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:38.099 00:12:38.099 real 0m12.949s 00:12:38.099 user 0m21.395s 00:12:38.099 sys 0m1.858s 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.099 ************************************ 00:12:38.099 END TEST raid_state_function_test 00:12:38.099 ************************************ 00:12:38.099 08:46:08 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:12:38.099 08:46:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:38.099 08:46:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.099 08:46:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:38.099 ************************************ 00:12:38.099 START TEST raid_state_function_test_sb 00:12:38.099 ************************************ 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:38.099 08:46:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70130 00:12:38.099 Process raid pid: 70130 00:12:38.099 08:46:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70130' 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70130 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70130 ']' 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.099 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.100 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.100 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.100 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.100 [2024-11-20 08:46:08.827788] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:12:38.100 [2024-11-20 08:46:08.827984] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:38.100 [2024-11-20 08:46:09.011314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:38.358 [2024-11-20 08:46:09.146289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:38.617 [2024-11-20 08:46:09.358013] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:38.617 [2024-11-20 08:46:09.358063] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:38.876 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:38.876 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:12:38.876 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:38.876 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.876 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:38.876 [2024-11-20 08:46:09.789859] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:38.876 [2024-11-20 08:46:09.789930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:38.876 [2024-11-20 08:46:09.789948] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:38.876 [2024-11-20 08:46:09.789965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:38.876 [2024-11-20 08:46:09.789976] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find
bdev with name: BaseBdev3
00:12:39.134 [2024-11-20 08:46:09.789992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:39.134 [2024-11-20 08:46:09.790002] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:12:39.134 [2024-11-20 08:46:09.790016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:12:39.134 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.134 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:12:39.134 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:39.134 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:39.134 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:39.134 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:39.134 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:39.134 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:39.134 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:39.134 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:39.134 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:39.134 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:39.134 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:39.134 08:46:09
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.134 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:39.134 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.134 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:39.134 "name": "Existed_Raid",
00:12:39.134 "uuid": "8e1cc1bd-948b-4fc5-852d-151d52a03b50",
00:12:39.134 "strip_size_kb": 64,
00:12:39.134 "state": "configuring",
00:12:39.134 "raid_level": "raid0",
00:12:39.134 "superblock": true,
00:12:39.134 "num_base_bdevs": 4,
00:12:39.134 "num_base_bdevs_discovered": 0,
00:12:39.134 "num_base_bdevs_operational": 4,
00:12:39.134 "base_bdevs_list": [
00:12:39.134 {
00:12:39.134 "name": "BaseBdev1",
00:12:39.134 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:39.134 "is_configured": false,
00:12:39.134 "data_offset": 0,
00:12:39.134 "data_size": 0
00:12:39.134 },
00:12:39.134 {
00:12:39.134 "name": "BaseBdev2",
00:12:39.134 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:39.134 "is_configured": false,
00:12:39.134 "data_offset": 0,
00:12:39.134 "data_size": 0
00:12:39.134 },
00:12:39.134 {
00:12:39.134 "name": "BaseBdev3",
00:12:39.134 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:39.134 "is_configured": false,
00:12:39.134 "data_offset": 0,
00:12:39.134 "data_size": 0
00:12:39.134 },
00:12:39.134 {
00:12:39.134 "name": "BaseBdev4",
00:12:39.134 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:39.134 "is_configured": false,
00:12:39.134 "data_offset": 0,
00:12:39.134 "data_size": 0
00:12:39.134 }
00:12:39.134 ]
00:12:39.134 }'
00:12:39.134 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:39.134 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:39.700 08:46:10 bdev_raid.raid_state_function_test_sb
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:39.700 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.700 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:39.700 [2024-11-20 08:46:10.313932] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:39.700 [2024-11-20 08:46:10.313980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:12:39.700 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.700 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:39.700 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.700 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:39.700 [2024-11-20 08:46:10.321926] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:39.700 [2024-11-20 08:46:10.322899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:39.700 [2024-11-20 08:46:10.322929] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:39.700 [2024-11-20 08:46:10.322949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:39.700 [2024-11-20 08:46:10.322960] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:39.700 [2024-11-20 08:46:10.322975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:39.700 [2024-11-20 08:46:10.322985] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name:
BaseBdev4
00:12:39.700 [2024-11-20 08:46:10.323000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:12:39.700 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.700 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:12:39.700 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.700 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:39.700 [2024-11-20 08:46:10.367468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:39.700 BaseBdev1
00:12:39.700 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.700 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:12:39.700 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:12:39.700 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:39.700 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:12:39.700 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:39.700 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:39.700 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:39.700 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0
== 0 ]]
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:39.701 [
00:12:39.701 {
00:12:39.701 "name": "BaseBdev1",
00:12:39.701 "aliases": [
00:12:39.701 "de0d29af-804c-436b-af32-fd8a460719a1"
00:12:39.701 ],
00:12:39.701 "product_name": "Malloc disk",
00:12:39.701 "block_size": 512,
00:12:39.701 "num_blocks": 65536,
00:12:39.701 "uuid": "de0d29af-804c-436b-af32-fd8a460719a1",
00:12:39.701 "assigned_rate_limits": {
00:12:39.701 "rw_ios_per_sec": 0,
00:12:39.701 "rw_mbytes_per_sec": 0,
00:12:39.701 "r_mbytes_per_sec": 0,
00:12:39.701 "w_mbytes_per_sec": 0
00:12:39.701 },
00:12:39.701 "claimed": true,
00:12:39.701 "claim_type": "exclusive_write",
00:12:39.701 "zoned": false,
00:12:39.701 "supported_io_types": {
00:12:39.701 "read": true,
00:12:39.701 "write": true,
00:12:39.701 "unmap": true,
00:12:39.701 "flush": true,
00:12:39.701 "reset": true,
00:12:39.701 "nvme_admin": false,
00:12:39.701 "nvme_io": false,
00:12:39.701 "nvme_io_md": false,
00:12:39.701 "write_zeroes": true,
00:12:39.701 "zcopy": true,
00:12:39.701 "get_zone_info": false,
00:12:39.701 "zone_management": false,
00:12:39.701 "zone_append": false,
00:12:39.701 "compare": false,
00:12:39.701 "compare_and_write": false,
00:12:39.701 "abort": true,
00:12:39.701 "seek_hole": false,
00:12:39.701 "seek_data": false,
00:12:39.701 "copy": true,
00:12:39.701 "nvme_iov_md": false
00:12:39.701 },
00:12:39.701 "memory_domains": [
00:12:39.701 {
00:12:39.701 "dma_device_id": "system",
00:12:39.701 "dma_device_type": 1
00:12:39.701 },
00:12:39.701 {
00:12:39.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:39.701 "dma_device_type": 2
00:12:39.701 }
00:12:39.701 ],
00:12:39.701 "driver_specific": {}
00:12:39.701 }
00:12:39.701 ]
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:39.701 "name": "Existed_Raid",
00:12:39.701 "uuid": "e5dd609a-4bf2-4ca9-8961-752635484c34",
00:12:39.701 "strip_size_kb": 64,
00:12:39.701 "state": "configuring",
00:12:39.701 "raid_level": "raid0",
00:12:39.701 "superblock": true,
00:12:39.701 "num_base_bdevs": 4,
00:12:39.701 "num_base_bdevs_discovered": 1,
00:12:39.701 "num_base_bdevs_operational": 4,
00:12:39.701 "base_bdevs_list": [
00:12:39.701 {
00:12:39.701 "name": "BaseBdev1",
00:12:39.701 "uuid": "de0d29af-804c-436b-af32-fd8a460719a1",
00:12:39.701 "is_configured": true,
00:12:39.701 "data_offset": 2048,
00:12:39.701 "data_size": 63488
00:12:39.701 },
00:12:39.701 {
00:12:39.701 "name": "BaseBdev2",
00:12:39.701 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:39.701 "is_configured": false,
00:12:39.701 "data_offset": 0,
00:12:39.701 "data_size": 0
00:12:39.701 },
00:12:39.701 {
00:12:39.701 "name": "BaseBdev3",
00:12:39.701 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:39.701 "is_configured": false,
00:12:39.701 "data_offset": 0,
00:12:39.701 "data_size": 0
00:12:39.701 },
00:12:39.701 {
00:12:39.701 "name": "BaseBdev4",
00:12:39.701 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:39.701 "is_configured": false,
00:12:39.701 "data_offset": 0,
00:12:39.701 "data_size": 0
00:12:39.701 }
00:12:39.701 ]
00:12:39.701 }'
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:39.701 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:40.011 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:40.011 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.011 08:46:10 bdev_raid.raid_state_function_test_sb --
common/autotest_common.sh@10 -- # set +x
00:12:40.011 [2024-11-20 08:46:10.903906] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:40.011 [2024-11-20 08:46:10.904021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:12:40.011 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.011 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:40.011 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.011 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:40.011 [2024-11-20 08:46:10.916046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:40.011 [2024-11-20 08:46:10.918918] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:40.011 [2024-11-20 08:46:10.919141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:40.011 [2024-11-20 08:46:10.919289] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:40.011 [2024-11-20 08:46:10.919356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:40.011 [2024-11-20 08:46:10.919468] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:12:40.011 [2024-11-20 08:46:10.919646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:12:40.011 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.011 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:12:40.012 08:46:10
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:40.012 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:12:40.012 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:40.012 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:40.012 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:40.012 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:40.012 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:40.012 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:40.012 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:40.012 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:40.012 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:40.270 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:40.270 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:40.270 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.270 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:40.270 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.270 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:40.270 "name":
"Existed_Raid",
00:12:40.270 "uuid": "dc3fc8db-e57c-470b-ab42-d9bb682f625e",
00:12:40.270 "strip_size_kb": 64,
00:12:40.270 "state": "configuring",
00:12:40.270 "raid_level": "raid0",
00:12:40.270 "superblock": true,
00:12:40.270 "num_base_bdevs": 4,
00:12:40.270 "num_base_bdevs_discovered": 1,
00:12:40.270 "num_base_bdevs_operational": 4,
00:12:40.270 "base_bdevs_list": [
00:12:40.270 {
00:12:40.270 "name": "BaseBdev1",
00:12:40.270 "uuid": "de0d29af-804c-436b-af32-fd8a460719a1",
00:12:40.270 "is_configured": true,
00:12:40.270 "data_offset": 2048,
00:12:40.270 "data_size": 63488
00:12:40.270 },
00:12:40.270 {
00:12:40.270 "name": "BaseBdev2",
00:12:40.270 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:40.270 "is_configured": false,
00:12:40.270 "data_offset": 0,
00:12:40.270 "data_size": 0
00:12:40.270 },
00:12:40.270 {
00:12:40.270 "name": "BaseBdev3",
00:12:40.270 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:40.270 "is_configured": false,
00:12:40.270 "data_offset": 0,
00:12:40.270 "data_size": 0
00:12:40.270 },
00:12:40.270 {
00:12:40.270 "name": "BaseBdev4",
00:12:40.270 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:40.270 "is_configured": false,
00:12:40.270 "data_offset": 0,
00:12:40.270 "data_size": 0
00:12:40.270 }
00:12:40.270 ]
00:12:40.270 }'
00:12:40.270 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:40.270 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:40.529 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:12:40.529 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.529 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:40.787 [2024-11-20 08:46:11.478548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:40.787 BaseBdev2
00:12:40.787 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.787 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:12:40.787 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:12:40.787 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:40.787 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:40.788 [
00:12:40.788 {
00:12:40.788 "name": "BaseBdev2",
00:12:40.788 "aliases": [
00:12:40.788 "262967e3-c551-43d5-b4b1-6a6a3e29761d"
00:12:40.788 ],
00:12:40.788 "product_name": "Malloc disk",
00:12:40.788 "block_size": 512,
00:12:40.788 "num_blocks": 65536,
00:12:40.788 "uuid": "262967e3-c551-43d5-b4b1-6a6a3e29761d",
00:12:40.788
"assigned_rate_limits": {
00:12:40.788 "rw_ios_per_sec": 0,
00:12:40.788 "rw_mbytes_per_sec": 0,
00:12:40.788 "r_mbytes_per_sec": 0,
00:12:40.788 "w_mbytes_per_sec": 0
00:12:40.788 },
00:12:40.788 "claimed": true,
00:12:40.788 "claim_type": "exclusive_write",
00:12:40.788 "zoned": false,
00:12:40.788 "supported_io_types": {
00:12:40.788 "read": true,
00:12:40.788 "write": true,
00:12:40.788 "unmap": true,
00:12:40.788 "flush": true,
00:12:40.788 "reset": true,
00:12:40.788 "nvme_admin": false,
00:12:40.788 "nvme_io": false,
00:12:40.788 "nvme_io_md": false,
00:12:40.788 "write_zeroes": true,
00:12:40.788 "zcopy": true,
00:12:40.788 "get_zone_info": false,
00:12:40.788 "zone_management": false,
00:12:40.788 "zone_append": false,
00:12:40.788 "compare": false,
00:12:40.788 "compare_and_write": false,
00:12:40.788 "abort": true,
00:12:40.788 "seek_hole": false,
00:12:40.788 "seek_data": false,
00:12:40.788 "copy": true,
00:12:40.788 "nvme_iov_md": false
00:12:40.788 },
00:12:40.788 "memory_domains": [
00:12:40.788 {
00:12:40.788 "dma_device_id": "system",
00:12:40.788 "dma_device_type": 1
00:12:40.788 },
00:12:40.788 {
00:12:40.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:40.788 "dma_device_type": 2
00:12:40.788 }
00:12:40.788 ],
00:12:40.788 "driver_specific": {}
00:12:40.788 }
00:12:40.788 ]
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local
raid_bdev_name=Existed_Raid
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:40.788 "name": "Existed_Raid",
00:12:40.788 "uuid": "dc3fc8db-e57c-470b-ab42-d9bb682f625e",
00:12:40.788 "strip_size_kb": 64,
00:12:40.788 "state": "configuring",
00:12:40.788 "raid_level": "raid0",
00:12:40.788 "superblock": true,
00:12:40.788 "num_base_bdevs": 4,
00:12:40.788 "num_base_bdevs_discovered": 2,
00:12:40.788 "num_base_bdevs_operational": 4,
00:12:40.788 "base_bdevs_list": [
00:12:40.788 {
00:12:40.788 "name": "BaseBdev1",
00:12:40.788 "uuid": "de0d29af-804c-436b-af32-fd8a460719a1",
00:12:40.788 "is_configured": true,
00:12:40.788 "data_offset": 2048,
00:12:40.788 "data_size": 63488
00:12:40.788 },
00:12:40.788 {
00:12:40.788 "name": "BaseBdev2",
00:12:40.788 "uuid": "262967e3-c551-43d5-b4b1-6a6a3e29761d",
00:12:40.788 "is_configured": true,
00:12:40.788 "data_offset": 2048,
00:12:40.788 "data_size": 63488
00:12:40.788 },
00:12:40.788 {
00:12:40.788 "name": "BaseBdev3",
00:12:40.788 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:40.788 "is_configured": false,
00:12:40.788 "data_offset": 0,
00:12:40.788 "data_size": 0
00:12:40.788 },
00:12:40.788 {
00:12:40.788 "name": "BaseBdev4",
00:12:40.788 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:40.788 "is_configured": false,
00:12:40.788 "data_offset": 0,
00:12:40.788 "data_size": 0
00:12:40.788 }
00:12:40.788 ]
00:12:40.788 }'
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:40.788 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:41.355 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:12:41.355 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:41.355 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:41.355 [2024-11-20 08:46:12.069555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:41.355 BaseBdev3
00:12:41.355 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:41.355 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:12:41.355 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- #
local bdev_name=BaseBdev3
00:12:41.355 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:41.355 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:12:41.355 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:41.355 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:41.355 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:41.355 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:41.355 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:41.355 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:41.355 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:12:41.355 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:41.355 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:41.355 [
00:12:41.355 {
00:12:41.355 "name": "BaseBdev3",
00:12:41.355 "aliases": [
00:12:41.355 "ed87a0ac-9760-4165-9870-d88154689fbe"
00:12:41.355 ],
00:12:41.355 "product_name": "Malloc disk",
00:12:41.355 "block_size": 512,
00:12:41.355 "num_blocks": 65536,
00:12:41.355 "uuid": "ed87a0ac-9760-4165-9870-d88154689fbe",
00:12:41.355 "assigned_rate_limits": {
00:12:41.355 "rw_ios_per_sec": 0,
00:12:41.355 "rw_mbytes_per_sec": 0,
00:12:41.355 "r_mbytes_per_sec": 0,
00:12:41.355 "w_mbytes_per_sec": 0
00:12:41.355 },
00:12:41.355 "claimed": true,
00:12:41.355 "claim_type": "exclusive_write",
00:12:41.355 "zoned": false,
00:12:41.355 "supported_io_types": {
00:12:41.355 "read": true,
00:12:41.355
"write": true,
00:12:41.355 "unmap": true,
00:12:41.355 "flush": true,
00:12:41.355 "reset": true,
00:12:41.355 "nvme_admin": false,
00:12:41.355 "nvme_io": false,
00:12:41.355 "nvme_io_md": false,
00:12:41.355 "write_zeroes": true,
00:12:41.355 "zcopy": true,
00:12:41.355 "get_zone_info": false,
00:12:41.355 "zone_management": false,
00:12:41.355 "zone_append": false,
00:12:41.355 "compare": false,
00:12:41.355 "compare_and_write": false,
00:12:41.355 "abort": true,
00:12:41.355 "seek_hole": false,
00:12:41.355 "seek_data": false,
00:12:41.355 "copy": true,
00:12:41.355 "nvme_iov_md": false
00:12:41.355 },
00:12:41.355 "memory_domains": [
00:12:41.355 {
00:12:41.355 "dma_device_id": "system",
00:12:41.355 "dma_device_type": 1
00:12:41.355 },
00:12:41.355 {
00:12:41.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:41.355 "dma_device_type": 2
00:12:41.356 }
00:12:41.356 ],
00:12:41.356 "driver_specific": {}
00:12:41.356 }
00:12:41.356 ]
00:12:41.356 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:41.356 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:12:41.356 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:41.356 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:41.356 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:12:41.356 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:41.356 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:41.356 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:41.356 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local
strip_size=64 00:12:41.356 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:41.356 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.356 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.356 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.356 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.356 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.356 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.356 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.356 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.356 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.356 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.356 "name": "Existed_Raid", 00:12:41.356 "uuid": "dc3fc8db-e57c-470b-ab42-d9bb682f625e", 00:12:41.356 "strip_size_kb": 64, 00:12:41.356 "state": "configuring", 00:12:41.356 "raid_level": "raid0", 00:12:41.356 "superblock": true, 00:12:41.356 "num_base_bdevs": 4, 00:12:41.356 "num_base_bdevs_discovered": 3, 00:12:41.356 "num_base_bdevs_operational": 4, 00:12:41.356 "base_bdevs_list": [ 00:12:41.356 { 00:12:41.356 "name": "BaseBdev1", 00:12:41.356 "uuid": "de0d29af-804c-436b-af32-fd8a460719a1", 00:12:41.356 "is_configured": true, 00:12:41.356 "data_offset": 2048, 00:12:41.356 "data_size": 63488 00:12:41.356 }, 00:12:41.356 { 00:12:41.356 "name": "BaseBdev2", 00:12:41.356 "uuid": 
"262967e3-c551-43d5-b4b1-6a6a3e29761d", 00:12:41.356 "is_configured": true, 00:12:41.356 "data_offset": 2048, 00:12:41.356 "data_size": 63488 00:12:41.356 }, 00:12:41.356 { 00:12:41.356 "name": "BaseBdev3", 00:12:41.356 "uuid": "ed87a0ac-9760-4165-9870-d88154689fbe", 00:12:41.356 "is_configured": true, 00:12:41.356 "data_offset": 2048, 00:12:41.356 "data_size": 63488 00:12:41.356 }, 00:12:41.356 { 00:12:41.356 "name": "BaseBdev4", 00:12:41.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.356 "is_configured": false, 00:12:41.356 "data_offset": 0, 00:12:41.356 "data_size": 0 00:12:41.356 } 00:12:41.356 ] 00:12:41.356 }' 00:12:41.356 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.356 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.923 [2024-11-20 08:46:12.636663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:41.923 [2024-11-20 08:46:12.637249] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:41.923 [2024-11-20 08:46:12.637276] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:41.923 BaseBdev4 00:12:41.923 [2024-11-20 08:46:12.637630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:41.923 [2024-11-20 08:46:12.637838] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:41.923 [2024-11-20 08:46:12.637869] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:41.923 [2024-11-20 08:46:12.638054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.923 [ 00:12:41.923 { 00:12:41.923 "name": "BaseBdev4", 00:12:41.923 "aliases": [ 00:12:41.923 "ab1e7a0c-2968-40f8-b091-bc61cc47203f" 00:12:41.923 ], 00:12:41.923 "product_name": "Malloc disk", 00:12:41.923 "block_size": 512, 00:12:41.923 
"num_blocks": 65536, 00:12:41.923 "uuid": "ab1e7a0c-2968-40f8-b091-bc61cc47203f", 00:12:41.923 "assigned_rate_limits": { 00:12:41.923 "rw_ios_per_sec": 0, 00:12:41.923 "rw_mbytes_per_sec": 0, 00:12:41.923 "r_mbytes_per_sec": 0, 00:12:41.923 "w_mbytes_per_sec": 0 00:12:41.923 }, 00:12:41.923 "claimed": true, 00:12:41.923 "claim_type": "exclusive_write", 00:12:41.923 "zoned": false, 00:12:41.923 "supported_io_types": { 00:12:41.923 "read": true, 00:12:41.923 "write": true, 00:12:41.923 "unmap": true, 00:12:41.923 "flush": true, 00:12:41.923 "reset": true, 00:12:41.923 "nvme_admin": false, 00:12:41.923 "nvme_io": false, 00:12:41.923 "nvme_io_md": false, 00:12:41.923 "write_zeroes": true, 00:12:41.923 "zcopy": true, 00:12:41.923 "get_zone_info": false, 00:12:41.923 "zone_management": false, 00:12:41.923 "zone_append": false, 00:12:41.923 "compare": false, 00:12:41.923 "compare_and_write": false, 00:12:41.923 "abort": true, 00:12:41.923 "seek_hole": false, 00:12:41.923 "seek_data": false, 00:12:41.923 "copy": true, 00:12:41.923 "nvme_iov_md": false 00:12:41.923 }, 00:12:41.923 "memory_domains": [ 00:12:41.923 { 00:12:41.923 "dma_device_id": "system", 00:12:41.923 "dma_device_type": 1 00:12:41.923 }, 00:12:41.923 { 00:12:41.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.923 "dma_device_type": 2 00:12:41.923 } 00:12:41.923 ], 00:12:41.923 "driver_specific": {} 00:12:41.923 } 00:12:41.923 ] 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.923 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.923 "name": "Existed_Raid", 00:12:41.923 "uuid": "dc3fc8db-e57c-470b-ab42-d9bb682f625e", 00:12:41.923 "strip_size_kb": 64, 00:12:41.923 "state": "online", 00:12:41.923 "raid_level": "raid0", 00:12:41.923 "superblock": true, 00:12:41.923 "num_base_bdevs": 4, 
00:12:41.923 "num_base_bdevs_discovered": 4, 00:12:41.923 "num_base_bdevs_operational": 4, 00:12:41.923 "base_bdevs_list": [ 00:12:41.923 { 00:12:41.923 "name": "BaseBdev1", 00:12:41.923 "uuid": "de0d29af-804c-436b-af32-fd8a460719a1", 00:12:41.923 "is_configured": true, 00:12:41.923 "data_offset": 2048, 00:12:41.923 "data_size": 63488 00:12:41.923 }, 00:12:41.923 { 00:12:41.923 "name": "BaseBdev2", 00:12:41.923 "uuid": "262967e3-c551-43d5-b4b1-6a6a3e29761d", 00:12:41.923 "is_configured": true, 00:12:41.923 "data_offset": 2048, 00:12:41.923 "data_size": 63488 00:12:41.923 }, 00:12:41.923 { 00:12:41.923 "name": "BaseBdev3", 00:12:41.923 "uuid": "ed87a0ac-9760-4165-9870-d88154689fbe", 00:12:41.923 "is_configured": true, 00:12:41.923 "data_offset": 2048, 00:12:41.923 "data_size": 63488 00:12:41.923 }, 00:12:41.923 { 00:12:41.923 "name": "BaseBdev4", 00:12:41.924 "uuid": "ab1e7a0c-2968-40f8-b091-bc61cc47203f", 00:12:41.924 "is_configured": true, 00:12:41.924 "data_offset": 2048, 00:12:41.924 "data_size": 63488 00:12:41.924 } 00:12:41.924 ] 00:12:41.924 }' 00:12:41.924 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.924 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.492 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:42.492 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:42.492 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:42.492 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:42.492 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:42.492 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:42.492 
08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:42.492 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.492 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.492 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:42.492 [2024-11-20 08:46:13.161334] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.492 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.492 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:42.492 "name": "Existed_Raid", 00:12:42.492 "aliases": [ 00:12:42.492 "dc3fc8db-e57c-470b-ab42-d9bb682f625e" 00:12:42.492 ], 00:12:42.492 "product_name": "Raid Volume", 00:12:42.492 "block_size": 512, 00:12:42.492 "num_blocks": 253952, 00:12:42.492 "uuid": "dc3fc8db-e57c-470b-ab42-d9bb682f625e", 00:12:42.492 "assigned_rate_limits": { 00:12:42.492 "rw_ios_per_sec": 0, 00:12:42.492 "rw_mbytes_per_sec": 0, 00:12:42.492 "r_mbytes_per_sec": 0, 00:12:42.492 "w_mbytes_per_sec": 0 00:12:42.492 }, 00:12:42.492 "claimed": false, 00:12:42.492 "zoned": false, 00:12:42.492 "supported_io_types": { 00:12:42.492 "read": true, 00:12:42.492 "write": true, 00:12:42.492 "unmap": true, 00:12:42.492 "flush": true, 00:12:42.492 "reset": true, 00:12:42.492 "nvme_admin": false, 00:12:42.492 "nvme_io": false, 00:12:42.492 "nvme_io_md": false, 00:12:42.492 "write_zeroes": true, 00:12:42.492 "zcopy": false, 00:12:42.492 "get_zone_info": false, 00:12:42.492 "zone_management": false, 00:12:42.492 "zone_append": false, 00:12:42.492 "compare": false, 00:12:42.492 "compare_and_write": false, 00:12:42.492 "abort": false, 00:12:42.492 "seek_hole": false, 00:12:42.492 "seek_data": false, 00:12:42.492 "copy": false, 00:12:42.492 
"nvme_iov_md": false 00:12:42.492 }, 00:12:42.492 "memory_domains": [ 00:12:42.492 { 00:12:42.492 "dma_device_id": "system", 00:12:42.492 "dma_device_type": 1 00:12:42.492 }, 00:12:42.492 { 00:12:42.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.492 "dma_device_type": 2 00:12:42.492 }, 00:12:42.492 { 00:12:42.492 "dma_device_id": "system", 00:12:42.492 "dma_device_type": 1 00:12:42.492 }, 00:12:42.492 { 00:12:42.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.492 "dma_device_type": 2 00:12:42.492 }, 00:12:42.492 { 00:12:42.492 "dma_device_id": "system", 00:12:42.492 "dma_device_type": 1 00:12:42.492 }, 00:12:42.492 { 00:12:42.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.492 "dma_device_type": 2 00:12:42.492 }, 00:12:42.492 { 00:12:42.492 "dma_device_id": "system", 00:12:42.492 "dma_device_type": 1 00:12:42.492 }, 00:12:42.492 { 00:12:42.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.492 "dma_device_type": 2 00:12:42.492 } 00:12:42.492 ], 00:12:42.492 "driver_specific": { 00:12:42.492 "raid": { 00:12:42.492 "uuid": "dc3fc8db-e57c-470b-ab42-d9bb682f625e", 00:12:42.492 "strip_size_kb": 64, 00:12:42.492 "state": "online", 00:12:42.492 "raid_level": "raid0", 00:12:42.492 "superblock": true, 00:12:42.492 "num_base_bdevs": 4, 00:12:42.492 "num_base_bdevs_discovered": 4, 00:12:42.492 "num_base_bdevs_operational": 4, 00:12:42.492 "base_bdevs_list": [ 00:12:42.492 { 00:12:42.492 "name": "BaseBdev1", 00:12:42.492 "uuid": "de0d29af-804c-436b-af32-fd8a460719a1", 00:12:42.492 "is_configured": true, 00:12:42.492 "data_offset": 2048, 00:12:42.492 "data_size": 63488 00:12:42.492 }, 00:12:42.492 { 00:12:42.492 "name": "BaseBdev2", 00:12:42.492 "uuid": "262967e3-c551-43d5-b4b1-6a6a3e29761d", 00:12:42.492 "is_configured": true, 00:12:42.492 "data_offset": 2048, 00:12:42.492 "data_size": 63488 00:12:42.492 }, 00:12:42.492 { 00:12:42.492 "name": "BaseBdev3", 00:12:42.492 "uuid": "ed87a0ac-9760-4165-9870-d88154689fbe", 00:12:42.492 "is_configured": true, 
00:12:42.492 "data_offset": 2048, 00:12:42.492 "data_size": 63488 00:12:42.492 }, 00:12:42.492 { 00:12:42.492 "name": "BaseBdev4", 00:12:42.492 "uuid": "ab1e7a0c-2968-40f8-b091-bc61cc47203f", 00:12:42.492 "is_configured": true, 00:12:42.492 "data_offset": 2048, 00:12:42.492 "data_size": 63488 00:12:42.492 } 00:12:42.492 ] 00:12:42.492 } 00:12:42.492 } 00:12:42.492 }' 00:12:42.492 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:42.492 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:42.493 BaseBdev2 00:12:42.493 BaseBdev3 00:12:42.493 BaseBdev4' 00:12:42.493 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.493 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:42.493 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.493 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:42.493 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.493 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.493 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.493 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.493 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.493 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.493 08:46:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.493 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:42.493 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.493 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.493 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.493 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.751 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.751 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.751 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.751 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:42.751 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.751 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.752 [2024-11-20 08:46:13.521099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:42.752 [2024-11-20 08:46:13.521168] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:42.752 [2024-11-20 08:46:13.521245] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.752 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:43.010 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.010 "name": "Existed_Raid", 00:12:43.010 "uuid": "dc3fc8db-e57c-470b-ab42-d9bb682f625e", 00:12:43.010 "strip_size_kb": 64, 00:12:43.010 "state": "offline", 00:12:43.010 "raid_level": "raid0", 00:12:43.010 "superblock": true, 00:12:43.010 "num_base_bdevs": 4, 00:12:43.010 "num_base_bdevs_discovered": 3, 00:12:43.010 "num_base_bdevs_operational": 3, 00:12:43.010 "base_bdevs_list": [ 00:12:43.010 { 00:12:43.010 "name": null, 00:12:43.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.010 "is_configured": false, 00:12:43.010 "data_offset": 0, 00:12:43.010 "data_size": 63488 00:12:43.010 }, 00:12:43.010 { 00:12:43.010 "name": "BaseBdev2", 00:12:43.010 "uuid": "262967e3-c551-43d5-b4b1-6a6a3e29761d", 00:12:43.010 "is_configured": true, 00:12:43.010 "data_offset": 2048, 00:12:43.010 "data_size": 63488 00:12:43.010 }, 00:12:43.010 { 00:12:43.010 "name": "BaseBdev3", 00:12:43.010 "uuid": "ed87a0ac-9760-4165-9870-d88154689fbe", 00:12:43.010 "is_configured": true, 00:12:43.010 "data_offset": 2048, 00:12:43.010 "data_size": 63488 00:12:43.010 }, 00:12:43.010 { 00:12:43.010 "name": "BaseBdev4", 00:12:43.010 "uuid": "ab1e7a0c-2968-40f8-b091-bc61cc47203f", 00:12:43.010 "is_configured": true, 00:12:43.010 "data_offset": 2048, 00:12:43.010 "data_size": 63488 00:12:43.010 } 00:12:43.010 ] 00:12:43.010 }' 00:12:43.010 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.010 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.269 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:43.269 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:43.269 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.269 
08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:43.269 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.269 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.269 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.526 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:43.526 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:43.526 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:43.527 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.527 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.527 [2024-11-20 08:46:14.211912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:43.527 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.527 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:43.527 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:43.527 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.527 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:43.527 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.527 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.527 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:43.527 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:43.527 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:43.527 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:43.527 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.527 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.527 [2024-11-20 08:46:14.354017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:43.785 08:46:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.785 [2024-11-20 08:46:14.504788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:43.785 [2024-11-20 08:46:14.504867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.785 BaseBdev2 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.785 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.044 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.044 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:44.044 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.044 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.044 [ 00:12:44.044 { 00:12:44.044 "name": "BaseBdev2", 00:12:44.044 "aliases": [ 00:12:44.044 
"57ba4d6a-a0d5-4d2d-9f03-1e01863c12fb" 00:12:44.044 ], 00:12:44.044 "product_name": "Malloc disk", 00:12:44.044 "block_size": 512, 00:12:44.044 "num_blocks": 65536, 00:12:44.044 "uuid": "57ba4d6a-a0d5-4d2d-9f03-1e01863c12fb", 00:12:44.044 "assigned_rate_limits": { 00:12:44.044 "rw_ios_per_sec": 0, 00:12:44.044 "rw_mbytes_per_sec": 0, 00:12:44.044 "r_mbytes_per_sec": 0, 00:12:44.044 "w_mbytes_per_sec": 0 00:12:44.044 }, 00:12:44.044 "claimed": false, 00:12:44.044 "zoned": false, 00:12:44.044 "supported_io_types": { 00:12:44.044 "read": true, 00:12:44.044 "write": true, 00:12:44.044 "unmap": true, 00:12:44.044 "flush": true, 00:12:44.044 "reset": true, 00:12:44.044 "nvme_admin": false, 00:12:44.044 "nvme_io": false, 00:12:44.044 "nvme_io_md": false, 00:12:44.044 "write_zeroes": true, 00:12:44.044 "zcopy": true, 00:12:44.044 "get_zone_info": false, 00:12:44.044 "zone_management": false, 00:12:44.044 "zone_append": false, 00:12:44.044 "compare": false, 00:12:44.044 "compare_and_write": false, 00:12:44.044 "abort": true, 00:12:44.044 "seek_hole": false, 00:12:44.044 "seek_data": false, 00:12:44.044 "copy": true, 00:12:44.044 "nvme_iov_md": false 00:12:44.044 }, 00:12:44.044 "memory_domains": [ 00:12:44.044 { 00:12:44.044 "dma_device_id": "system", 00:12:44.044 "dma_device_type": 1 00:12:44.044 }, 00:12:44.044 { 00:12:44.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.044 "dma_device_type": 2 00:12:44.044 } 00:12:44.044 ], 00:12:44.044 "driver_specific": {} 00:12:44.044 } 00:12:44.044 ] 00:12:44.044 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.044 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:44.044 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:44.044 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:44.044 08:46:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:44.044 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.044 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.044 BaseBdev3 00:12:44.044 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.044 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:44.044 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.045 [ 00:12:44.045 { 
00:12:44.045 "name": "BaseBdev3", 00:12:44.045 "aliases": [ 00:12:44.045 "6cd52324-37d5-4ac6-8789-b85b1d409b7f" 00:12:44.045 ], 00:12:44.045 "product_name": "Malloc disk", 00:12:44.045 "block_size": 512, 00:12:44.045 "num_blocks": 65536, 00:12:44.045 "uuid": "6cd52324-37d5-4ac6-8789-b85b1d409b7f", 00:12:44.045 "assigned_rate_limits": { 00:12:44.045 "rw_ios_per_sec": 0, 00:12:44.045 "rw_mbytes_per_sec": 0, 00:12:44.045 "r_mbytes_per_sec": 0, 00:12:44.045 "w_mbytes_per_sec": 0 00:12:44.045 }, 00:12:44.045 "claimed": false, 00:12:44.045 "zoned": false, 00:12:44.045 "supported_io_types": { 00:12:44.045 "read": true, 00:12:44.045 "write": true, 00:12:44.045 "unmap": true, 00:12:44.045 "flush": true, 00:12:44.045 "reset": true, 00:12:44.045 "nvme_admin": false, 00:12:44.045 "nvme_io": false, 00:12:44.045 "nvme_io_md": false, 00:12:44.045 "write_zeroes": true, 00:12:44.045 "zcopy": true, 00:12:44.045 "get_zone_info": false, 00:12:44.045 "zone_management": false, 00:12:44.045 "zone_append": false, 00:12:44.045 "compare": false, 00:12:44.045 "compare_and_write": false, 00:12:44.045 "abort": true, 00:12:44.045 "seek_hole": false, 00:12:44.045 "seek_data": false, 00:12:44.045 "copy": true, 00:12:44.045 "nvme_iov_md": false 00:12:44.045 }, 00:12:44.045 "memory_domains": [ 00:12:44.045 { 00:12:44.045 "dma_device_id": "system", 00:12:44.045 "dma_device_type": 1 00:12:44.045 }, 00:12:44.045 { 00:12:44.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.045 "dma_device_type": 2 00:12:44.045 } 00:12:44.045 ], 00:12:44.045 "driver_specific": {} 00:12:44.045 } 00:12:44.045 ] 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.045 BaseBdev4 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:44.045 [ 00:12:44.045 { 00:12:44.045 "name": "BaseBdev4", 00:12:44.045 "aliases": [ 00:12:44.045 "d1573982-9d8d-4b79-be21-c013ea318df3" 00:12:44.045 ], 00:12:44.045 "product_name": "Malloc disk", 00:12:44.045 "block_size": 512, 00:12:44.045 "num_blocks": 65536, 00:12:44.045 "uuid": "d1573982-9d8d-4b79-be21-c013ea318df3", 00:12:44.045 "assigned_rate_limits": { 00:12:44.045 "rw_ios_per_sec": 0, 00:12:44.045 "rw_mbytes_per_sec": 0, 00:12:44.045 "r_mbytes_per_sec": 0, 00:12:44.045 "w_mbytes_per_sec": 0 00:12:44.045 }, 00:12:44.045 "claimed": false, 00:12:44.045 "zoned": false, 00:12:44.045 "supported_io_types": { 00:12:44.045 "read": true, 00:12:44.045 "write": true, 00:12:44.045 "unmap": true, 00:12:44.045 "flush": true, 00:12:44.045 "reset": true, 00:12:44.045 "nvme_admin": false, 00:12:44.045 "nvme_io": false, 00:12:44.045 "nvme_io_md": false, 00:12:44.045 "write_zeroes": true, 00:12:44.045 "zcopy": true, 00:12:44.045 "get_zone_info": false, 00:12:44.045 "zone_management": false, 00:12:44.045 "zone_append": false, 00:12:44.045 "compare": false, 00:12:44.045 "compare_and_write": false, 00:12:44.045 "abort": true, 00:12:44.045 "seek_hole": false, 00:12:44.045 "seek_data": false, 00:12:44.045 "copy": true, 00:12:44.045 "nvme_iov_md": false 00:12:44.045 }, 00:12:44.045 "memory_domains": [ 00:12:44.045 { 00:12:44.045 "dma_device_id": "system", 00:12:44.045 "dma_device_type": 1 00:12:44.045 }, 00:12:44.045 { 00:12:44.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.045 "dma_device_type": 2 00:12:44.045 } 00:12:44.045 ], 00:12:44.045 "driver_specific": {} 00:12:44.045 } 00:12:44.045 ] 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:44.045 08:46:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:44.045 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.046 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.046 [2024-11-20 08:46:14.873922] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:44.046 [2024-11-20 08:46:14.873983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:44.046 [2024-11-20 08:46:14.874022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:44.046 [2024-11-20 08:46:14.876604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:44.046 [2024-11-20 08:46:14.876687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:44.046 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.046 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:44.046 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.046 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.046 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:44.046 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.046 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:44.046 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.046 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.046 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.046 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.046 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.046 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.046 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.046 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.046 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.046 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.046 "name": "Existed_Raid", 00:12:44.046 "uuid": "7dff0037-c34e-4482-9e68-f054731e95fd", 00:12:44.046 "strip_size_kb": 64, 00:12:44.046 "state": "configuring", 00:12:44.046 "raid_level": "raid0", 00:12:44.046 "superblock": true, 00:12:44.046 "num_base_bdevs": 4, 00:12:44.046 "num_base_bdevs_discovered": 3, 00:12:44.046 "num_base_bdevs_operational": 4, 00:12:44.046 "base_bdevs_list": [ 00:12:44.046 { 00:12:44.046 "name": "BaseBdev1", 00:12:44.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.046 "is_configured": false, 00:12:44.046 "data_offset": 0, 00:12:44.046 "data_size": 0 00:12:44.046 }, 00:12:44.046 { 00:12:44.046 "name": "BaseBdev2", 00:12:44.046 "uuid": "57ba4d6a-a0d5-4d2d-9f03-1e01863c12fb", 00:12:44.046 "is_configured": true, 00:12:44.046 "data_offset": 2048, 00:12:44.046 "data_size": 63488 
00:12:44.046 }, 00:12:44.046 { 00:12:44.046 "name": "BaseBdev3", 00:12:44.046 "uuid": "6cd52324-37d5-4ac6-8789-b85b1d409b7f", 00:12:44.046 "is_configured": true, 00:12:44.046 "data_offset": 2048, 00:12:44.046 "data_size": 63488 00:12:44.046 }, 00:12:44.046 { 00:12:44.046 "name": "BaseBdev4", 00:12:44.046 "uuid": "d1573982-9d8d-4b79-be21-c013ea318df3", 00:12:44.046 "is_configured": true, 00:12:44.046 "data_offset": 2048, 00:12:44.046 "data_size": 63488 00:12:44.046 } 00:12:44.046 ] 00:12:44.046 }' 00:12:44.046 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.046 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.612 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:44.612 08:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.612 08:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.612 [2024-11-20 08:46:15.438057] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:44.612 08:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.612 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:44.612 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.612 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.612 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:44.612 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.612 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:44.612 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.612 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.612 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.612 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.612 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.612 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.612 08:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.612 08:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.612 08:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.612 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.612 "name": "Existed_Raid", 00:12:44.612 "uuid": "7dff0037-c34e-4482-9e68-f054731e95fd", 00:12:44.612 "strip_size_kb": 64, 00:12:44.613 "state": "configuring", 00:12:44.613 "raid_level": "raid0", 00:12:44.613 "superblock": true, 00:12:44.613 "num_base_bdevs": 4, 00:12:44.613 "num_base_bdevs_discovered": 2, 00:12:44.613 "num_base_bdevs_operational": 4, 00:12:44.613 "base_bdevs_list": [ 00:12:44.613 { 00:12:44.613 "name": "BaseBdev1", 00:12:44.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.613 "is_configured": false, 00:12:44.613 "data_offset": 0, 00:12:44.613 "data_size": 0 00:12:44.613 }, 00:12:44.613 { 00:12:44.613 "name": null, 00:12:44.613 "uuid": "57ba4d6a-a0d5-4d2d-9f03-1e01863c12fb", 00:12:44.613 "is_configured": false, 00:12:44.613 "data_offset": 0, 00:12:44.613 "data_size": 63488 
00:12:44.613 }, 00:12:44.613 { 00:12:44.613 "name": "BaseBdev3", 00:12:44.613 "uuid": "6cd52324-37d5-4ac6-8789-b85b1d409b7f", 00:12:44.613 "is_configured": true, 00:12:44.613 "data_offset": 2048, 00:12:44.613 "data_size": 63488 00:12:44.613 }, 00:12:44.613 { 00:12:44.613 "name": "BaseBdev4", 00:12:44.613 "uuid": "d1573982-9d8d-4b79-be21-c013ea318df3", 00:12:44.613 "is_configured": true, 00:12:44.613 "data_offset": 2048, 00:12:44.613 "data_size": 63488 00:12:44.613 } 00:12:44.613 ] 00:12:44.613 }' 00:12:44.613 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.613 08:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.179 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:45.179 08:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.179 08:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.179 08:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.179 08:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.179 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:45.179 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:45.179 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.179 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.179 [2024-11-20 08:46:16.061033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:45.179 BaseBdev1 00:12:45.179 08:46:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.179 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:45.179 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:45.179 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:45.179 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:45.179 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:45.179 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:45.179 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:45.179 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.179 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.179 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.179 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:45.179 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.179 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.179 [ 00:12:45.179 { 00:12:45.179 "name": "BaseBdev1", 00:12:45.179 "aliases": [ 00:12:45.180 "8d7c0359-2918-4464-91b6-802626b868b5" 00:12:45.180 ], 00:12:45.180 "product_name": "Malloc disk", 00:12:45.180 "block_size": 512, 00:12:45.180 "num_blocks": 65536, 00:12:45.180 "uuid": "8d7c0359-2918-4464-91b6-802626b868b5", 00:12:45.180 "assigned_rate_limits": { 00:12:45.180 "rw_ios_per_sec": 0, 00:12:45.180 "rw_mbytes_per_sec": 0, 
00:12:45.180 "r_mbytes_per_sec": 0, 00:12:45.180 "w_mbytes_per_sec": 0 00:12:45.180 }, 00:12:45.180 "claimed": true, 00:12:45.180 "claim_type": "exclusive_write", 00:12:45.180 "zoned": false, 00:12:45.180 "supported_io_types": { 00:12:45.180 "read": true, 00:12:45.180 "write": true, 00:12:45.180 "unmap": true, 00:12:45.180 "flush": true, 00:12:45.180 "reset": true, 00:12:45.180 "nvme_admin": false, 00:12:45.180 "nvme_io": false, 00:12:45.180 "nvme_io_md": false, 00:12:45.180 "write_zeroes": true, 00:12:45.180 "zcopy": true, 00:12:45.180 "get_zone_info": false, 00:12:45.180 "zone_management": false, 00:12:45.180 "zone_append": false, 00:12:45.180 "compare": false, 00:12:45.180 "compare_and_write": false, 00:12:45.180 "abort": true, 00:12:45.180 "seek_hole": false, 00:12:45.180 "seek_data": false, 00:12:45.180 "copy": true, 00:12:45.180 "nvme_iov_md": false 00:12:45.180 }, 00:12:45.180 "memory_domains": [ 00:12:45.180 { 00:12:45.180 "dma_device_id": "system", 00:12:45.180 "dma_device_type": 1 00:12:45.180 }, 00:12:45.180 { 00:12:45.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.180 "dma_device_type": 2 00:12:45.180 } 00:12:45.180 ], 00:12:45.180 "driver_specific": {} 00:12:45.180 } 00:12:45.180 ] 00:12:45.180 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.180 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:45.438 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:45.438 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.438 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:45.438 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:45.438 08:46:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.438 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.438 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.438 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.438 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.438 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.439 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.439 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.439 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.439 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.439 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.439 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.439 "name": "Existed_Raid", 00:12:45.439 "uuid": "7dff0037-c34e-4482-9e68-f054731e95fd", 00:12:45.439 "strip_size_kb": 64, 00:12:45.439 "state": "configuring", 00:12:45.439 "raid_level": "raid0", 00:12:45.439 "superblock": true, 00:12:45.439 "num_base_bdevs": 4, 00:12:45.439 "num_base_bdevs_discovered": 3, 00:12:45.439 "num_base_bdevs_operational": 4, 00:12:45.439 "base_bdevs_list": [ 00:12:45.439 { 00:12:45.439 "name": "BaseBdev1", 00:12:45.439 "uuid": "8d7c0359-2918-4464-91b6-802626b868b5", 00:12:45.439 "is_configured": true, 00:12:45.439 "data_offset": 2048, 00:12:45.439 "data_size": 63488 00:12:45.439 }, 00:12:45.439 { 
00:12:45.439 "name": null, 00:12:45.439 "uuid": "57ba4d6a-a0d5-4d2d-9f03-1e01863c12fb", 00:12:45.439 "is_configured": false, 00:12:45.439 "data_offset": 0, 00:12:45.439 "data_size": 63488 00:12:45.439 }, 00:12:45.439 { 00:12:45.439 "name": "BaseBdev3", 00:12:45.439 "uuid": "6cd52324-37d5-4ac6-8789-b85b1d409b7f", 00:12:45.439 "is_configured": true, 00:12:45.439 "data_offset": 2048, 00:12:45.439 "data_size": 63488 00:12:45.439 }, 00:12:45.439 { 00:12:45.439 "name": "BaseBdev4", 00:12:45.439 "uuid": "d1573982-9d8d-4b79-be21-c013ea318df3", 00:12:45.439 "is_configured": true, 00:12:45.439 "data_offset": 2048, 00:12:45.439 "data_size": 63488 00:12:45.439 } 00:12:45.439 ] 00:12:45.439 }' 00:12:45.439 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.439 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.697 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.697 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.697 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.697 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:45.697 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.956 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:45.956 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:45.956 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.956 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.956 [2024-11-20 08:46:16.629226] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:45.956 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.956 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:45.956 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.956 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:45.956 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:45.956 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.956 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.956 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.956 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.956 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.956 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.956 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.956 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.956 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.956 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.956 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.956 08:46:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.956 "name": "Existed_Raid", 00:12:45.956 "uuid": "7dff0037-c34e-4482-9e68-f054731e95fd", 00:12:45.956 "strip_size_kb": 64, 00:12:45.956 "state": "configuring", 00:12:45.956 "raid_level": "raid0", 00:12:45.956 "superblock": true, 00:12:45.956 "num_base_bdevs": 4, 00:12:45.956 "num_base_bdevs_discovered": 2, 00:12:45.956 "num_base_bdevs_operational": 4, 00:12:45.956 "base_bdevs_list": [ 00:12:45.956 { 00:12:45.956 "name": "BaseBdev1", 00:12:45.956 "uuid": "8d7c0359-2918-4464-91b6-802626b868b5", 00:12:45.956 "is_configured": true, 00:12:45.956 "data_offset": 2048, 00:12:45.956 "data_size": 63488 00:12:45.956 }, 00:12:45.956 { 00:12:45.956 "name": null, 00:12:45.956 "uuid": "57ba4d6a-a0d5-4d2d-9f03-1e01863c12fb", 00:12:45.956 "is_configured": false, 00:12:45.956 "data_offset": 0, 00:12:45.956 "data_size": 63488 00:12:45.956 }, 00:12:45.956 { 00:12:45.956 "name": null, 00:12:45.956 "uuid": "6cd52324-37d5-4ac6-8789-b85b1d409b7f", 00:12:45.956 "is_configured": false, 00:12:45.956 "data_offset": 0, 00:12:45.956 "data_size": 63488 00:12:45.956 }, 00:12:45.956 { 00:12:45.956 "name": "BaseBdev4", 00:12:45.956 "uuid": "d1573982-9d8d-4b79-be21-c013ea318df3", 00:12:45.956 "is_configured": true, 00:12:45.956 "data_offset": 2048, 00:12:45.956 "data_size": 63488 00:12:45.956 } 00:12:45.956 ] 00:12:45.956 }' 00:12:45.956 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.956 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.524 08:46:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.524 [2024-11-20 08:46:17.201358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.524 "name": "Existed_Raid", 00:12:46.524 "uuid": "7dff0037-c34e-4482-9e68-f054731e95fd", 00:12:46.524 "strip_size_kb": 64, 00:12:46.524 "state": "configuring", 00:12:46.524 "raid_level": "raid0", 00:12:46.524 "superblock": true, 00:12:46.524 "num_base_bdevs": 4, 00:12:46.524 "num_base_bdevs_discovered": 3, 00:12:46.524 "num_base_bdevs_operational": 4, 00:12:46.524 "base_bdevs_list": [ 00:12:46.524 { 00:12:46.524 "name": "BaseBdev1", 00:12:46.524 "uuid": "8d7c0359-2918-4464-91b6-802626b868b5", 00:12:46.524 "is_configured": true, 00:12:46.524 "data_offset": 2048, 00:12:46.524 "data_size": 63488 00:12:46.524 }, 00:12:46.524 { 00:12:46.524 "name": null, 00:12:46.524 "uuid": "57ba4d6a-a0d5-4d2d-9f03-1e01863c12fb", 00:12:46.524 "is_configured": false, 00:12:46.524 "data_offset": 0, 00:12:46.524 "data_size": 63488 00:12:46.524 }, 00:12:46.524 { 00:12:46.524 "name": "BaseBdev3", 00:12:46.524 "uuid": "6cd52324-37d5-4ac6-8789-b85b1d409b7f", 00:12:46.524 "is_configured": true, 00:12:46.524 "data_offset": 2048, 00:12:46.524 "data_size": 63488 00:12:46.524 }, 00:12:46.524 { 00:12:46.524 "name": "BaseBdev4", 00:12:46.524 "uuid": 
"d1573982-9d8d-4b79-be21-c013ea318df3", 00:12:46.524 "is_configured": true, 00:12:46.524 "data_offset": 2048, 00:12:46.524 "data_size": 63488 00:12:46.524 } 00:12:46.524 ] 00:12:46.524 }' 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.524 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.090 [2024-11-20 08:46:17.781572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.090 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.090 "name": "Existed_Raid", 00:12:47.090 "uuid": "7dff0037-c34e-4482-9e68-f054731e95fd", 00:12:47.090 "strip_size_kb": 64, 00:12:47.090 "state": "configuring", 00:12:47.090 "raid_level": "raid0", 00:12:47.090 "superblock": true, 00:12:47.090 "num_base_bdevs": 4, 00:12:47.090 "num_base_bdevs_discovered": 2, 00:12:47.090 "num_base_bdevs_operational": 4, 00:12:47.090 "base_bdevs_list": [ 00:12:47.090 { 00:12:47.090 "name": null, 00:12:47.090 
"uuid": "8d7c0359-2918-4464-91b6-802626b868b5", 00:12:47.090 "is_configured": false, 00:12:47.090 "data_offset": 0, 00:12:47.091 "data_size": 63488 00:12:47.091 }, 00:12:47.091 { 00:12:47.091 "name": null, 00:12:47.091 "uuid": "57ba4d6a-a0d5-4d2d-9f03-1e01863c12fb", 00:12:47.091 "is_configured": false, 00:12:47.091 "data_offset": 0, 00:12:47.091 "data_size": 63488 00:12:47.091 }, 00:12:47.091 { 00:12:47.091 "name": "BaseBdev3", 00:12:47.091 "uuid": "6cd52324-37d5-4ac6-8789-b85b1d409b7f", 00:12:47.091 "is_configured": true, 00:12:47.091 "data_offset": 2048, 00:12:47.091 "data_size": 63488 00:12:47.091 }, 00:12:47.091 { 00:12:47.091 "name": "BaseBdev4", 00:12:47.091 "uuid": "d1573982-9d8d-4b79-be21-c013ea318df3", 00:12:47.091 "is_configured": true, 00:12:47.091 "data_offset": 2048, 00:12:47.091 "data_size": 63488 00:12:47.091 } 00:12:47.091 ] 00:12:47.091 }' 00:12:47.091 08:46:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.091 08:46:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.714 [2024-11-20 08:46:18.446598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.714 08:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.714 "name": "Existed_Raid", 00:12:47.714 "uuid": "7dff0037-c34e-4482-9e68-f054731e95fd", 00:12:47.714 "strip_size_kb": 64, 00:12:47.714 "state": "configuring", 00:12:47.714 "raid_level": "raid0", 00:12:47.714 "superblock": true, 00:12:47.714 "num_base_bdevs": 4, 00:12:47.714 "num_base_bdevs_discovered": 3, 00:12:47.714 "num_base_bdevs_operational": 4, 00:12:47.714 "base_bdevs_list": [ 00:12:47.714 { 00:12:47.714 "name": null, 00:12:47.714 "uuid": "8d7c0359-2918-4464-91b6-802626b868b5", 00:12:47.714 "is_configured": false, 00:12:47.714 "data_offset": 0, 00:12:47.714 "data_size": 63488 00:12:47.714 }, 00:12:47.714 { 00:12:47.714 "name": "BaseBdev2", 00:12:47.714 "uuid": "57ba4d6a-a0d5-4d2d-9f03-1e01863c12fb", 00:12:47.714 "is_configured": true, 00:12:47.714 "data_offset": 2048, 00:12:47.714 "data_size": 63488 00:12:47.714 }, 00:12:47.714 { 00:12:47.714 "name": "BaseBdev3", 00:12:47.714 "uuid": "6cd52324-37d5-4ac6-8789-b85b1d409b7f", 00:12:47.714 "is_configured": true, 00:12:47.715 "data_offset": 2048, 00:12:47.715 "data_size": 63488 00:12:47.715 }, 00:12:47.715 { 00:12:47.715 "name": "BaseBdev4", 00:12:47.715 "uuid": "d1573982-9d8d-4b79-be21-c013ea318df3", 00:12:47.715 "is_configured": true, 00:12:47.715 "data_offset": 2048, 00:12:47.715 "data_size": 63488 00:12:47.715 } 00:12:47.715 ] 00:12:47.715 }' 00:12:47.715 08:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.715 08:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.293 08:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.293 08:46:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.293 08:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.293 08:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:48.293 08:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.293 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:48.293 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.293 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.293 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.293 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:48.293 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.293 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8d7c0359-2918-4464-91b6-802626b868b5 00:12:48.293 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.293 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.293 [2024-11-20 08:46:19.092515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:48.293 [2024-11-20 08:46:19.092846] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:48.293 [2024-11-20 08:46:19.092867] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:48.293 NewBaseBdev 00:12:48.293 [2024-11-20 08:46:19.093203] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:48.293 [2024-11-20 08:46:19.093416] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:48.293 [2024-11-20 08:46:19.093442] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:48.293 [2024-11-20 08:46:19.093611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.293 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.293 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:48.293 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:48.293 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:48.293 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:48.293 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:48.293 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:48.293 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:48.293 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.294 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.294 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.294 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:48.294 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.294 
08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.294 [ 00:12:48.294 { 00:12:48.294 "name": "NewBaseBdev", 00:12:48.294 "aliases": [ 00:12:48.294 "8d7c0359-2918-4464-91b6-802626b868b5" 00:12:48.294 ], 00:12:48.294 "product_name": "Malloc disk", 00:12:48.294 "block_size": 512, 00:12:48.294 "num_blocks": 65536, 00:12:48.294 "uuid": "8d7c0359-2918-4464-91b6-802626b868b5", 00:12:48.294 "assigned_rate_limits": { 00:12:48.294 "rw_ios_per_sec": 0, 00:12:48.294 "rw_mbytes_per_sec": 0, 00:12:48.294 "r_mbytes_per_sec": 0, 00:12:48.294 "w_mbytes_per_sec": 0 00:12:48.294 }, 00:12:48.294 "claimed": true, 00:12:48.294 "claim_type": "exclusive_write", 00:12:48.294 "zoned": false, 00:12:48.294 "supported_io_types": { 00:12:48.294 "read": true, 00:12:48.294 "write": true, 00:12:48.294 "unmap": true, 00:12:48.294 "flush": true, 00:12:48.294 "reset": true, 00:12:48.294 "nvme_admin": false, 00:12:48.294 "nvme_io": false, 00:12:48.294 "nvme_io_md": false, 00:12:48.294 "write_zeroes": true, 00:12:48.294 "zcopy": true, 00:12:48.294 "get_zone_info": false, 00:12:48.294 "zone_management": false, 00:12:48.294 "zone_append": false, 00:12:48.294 "compare": false, 00:12:48.294 "compare_and_write": false, 00:12:48.294 "abort": true, 00:12:48.294 "seek_hole": false, 00:12:48.294 "seek_data": false, 00:12:48.294 "copy": true, 00:12:48.294 "nvme_iov_md": false 00:12:48.294 }, 00:12:48.294 "memory_domains": [ 00:12:48.294 { 00:12:48.294 "dma_device_id": "system", 00:12:48.294 "dma_device_type": 1 00:12:48.294 }, 00:12:48.294 { 00:12:48.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.294 "dma_device_type": 2 00:12:48.294 } 00:12:48.294 ], 00:12:48.294 "driver_specific": {} 00:12:48.294 } 00:12:48.294 ] 00:12:48.294 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.294 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:48.294 08:46:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:48.294 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.294 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.294 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:48.294 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.294 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:48.294 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.294 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.294 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.294 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.294 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.294 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.294 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.294 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.294 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.294 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.294 "name": "Existed_Raid", 00:12:48.294 "uuid": "7dff0037-c34e-4482-9e68-f054731e95fd", 00:12:48.294 "strip_size_kb": 64, 00:12:48.294 
"state": "online", 00:12:48.294 "raid_level": "raid0", 00:12:48.294 "superblock": true, 00:12:48.294 "num_base_bdevs": 4, 00:12:48.294 "num_base_bdevs_discovered": 4, 00:12:48.294 "num_base_bdevs_operational": 4, 00:12:48.294 "base_bdevs_list": [ 00:12:48.294 { 00:12:48.294 "name": "NewBaseBdev", 00:12:48.294 "uuid": "8d7c0359-2918-4464-91b6-802626b868b5", 00:12:48.294 "is_configured": true, 00:12:48.294 "data_offset": 2048, 00:12:48.294 "data_size": 63488 00:12:48.294 }, 00:12:48.294 { 00:12:48.294 "name": "BaseBdev2", 00:12:48.294 "uuid": "57ba4d6a-a0d5-4d2d-9f03-1e01863c12fb", 00:12:48.294 "is_configured": true, 00:12:48.294 "data_offset": 2048, 00:12:48.294 "data_size": 63488 00:12:48.294 }, 00:12:48.294 { 00:12:48.294 "name": "BaseBdev3", 00:12:48.294 "uuid": "6cd52324-37d5-4ac6-8789-b85b1d409b7f", 00:12:48.294 "is_configured": true, 00:12:48.294 "data_offset": 2048, 00:12:48.294 "data_size": 63488 00:12:48.294 }, 00:12:48.294 { 00:12:48.294 "name": "BaseBdev4", 00:12:48.294 "uuid": "d1573982-9d8d-4b79-be21-c013ea318df3", 00:12:48.294 "is_configured": true, 00:12:48.294 "data_offset": 2048, 00:12:48.294 "data_size": 63488 00:12:48.294 } 00:12:48.294 ] 00:12:48.294 }' 00:12:48.294 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.294 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.861 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:48.861 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:48.861 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:48.861 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:48.861 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:48.861 
08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:48.861 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:48.861 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:48.861 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.861 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.861 [2024-11-20 08:46:19.645179] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:48.861 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.861 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:48.861 "name": "Existed_Raid", 00:12:48.861 "aliases": [ 00:12:48.861 "7dff0037-c34e-4482-9e68-f054731e95fd" 00:12:48.861 ], 00:12:48.861 "product_name": "Raid Volume", 00:12:48.861 "block_size": 512, 00:12:48.861 "num_blocks": 253952, 00:12:48.861 "uuid": "7dff0037-c34e-4482-9e68-f054731e95fd", 00:12:48.861 "assigned_rate_limits": { 00:12:48.861 "rw_ios_per_sec": 0, 00:12:48.861 "rw_mbytes_per_sec": 0, 00:12:48.861 "r_mbytes_per_sec": 0, 00:12:48.861 "w_mbytes_per_sec": 0 00:12:48.861 }, 00:12:48.861 "claimed": false, 00:12:48.861 "zoned": false, 00:12:48.861 "supported_io_types": { 00:12:48.861 "read": true, 00:12:48.861 "write": true, 00:12:48.861 "unmap": true, 00:12:48.861 "flush": true, 00:12:48.861 "reset": true, 00:12:48.861 "nvme_admin": false, 00:12:48.861 "nvme_io": false, 00:12:48.861 "nvme_io_md": false, 00:12:48.861 "write_zeroes": true, 00:12:48.861 "zcopy": false, 00:12:48.861 "get_zone_info": false, 00:12:48.861 "zone_management": false, 00:12:48.861 "zone_append": false, 00:12:48.861 "compare": false, 00:12:48.861 "compare_and_write": false, 00:12:48.861 "abort": 
false, 00:12:48.861 "seek_hole": false, 00:12:48.861 "seek_data": false, 00:12:48.861 "copy": false, 00:12:48.861 "nvme_iov_md": false 00:12:48.861 }, 00:12:48.861 "memory_domains": [ 00:12:48.861 { 00:12:48.861 "dma_device_id": "system", 00:12:48.861 "dma_device_type": 1 00:12:48.861 }, 00:12:48.861 { 00:12:48.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.861 "dma_device_type": 2 00:12:48.861 }, 00:12:48.861 { 00:12:48.861 "dma_device_id": "system", 00:12:48.861 "dma_device_type": 1 00:12:48.861 }, 00:12:48.861 { 00:12:48.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.861 "dma_device_type": 2 00:12:48.861 }, 00:12:48.861 { 00:12:48.861 "dma_device_id": "system", 00:12:48.861 "dma_device_type": 1 00:12:48.861 }, 00:12:48.861 { 00:12:48.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.861 "dma_device_type": 2 00:12:48.861 }, 00:12:48.861 { 00:12:48.861 "dma_device_id": "system", 00:12:48.861 "dma_device_type": 1 00:12:48.861 }, 00:12:48.861 { 00:12:48.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.861 "dma_device_type": 2 00:12:48.861 } 00:12:48.861 ], 00:12:48.861 "driver_specific": { 00:12:48.861 "raid": { 00:12:48.861 "uuid": "7dff0037-c34e-4482-9e68-f054731e95fd", 00:12:48.861 "strip_size_kb": 64, 00:12:48.861 "state": "online", 00:12:48.861 "raid_level": "raid0", 00:12:48.861 "superblock": true, 00:12:48.861 "num_base_bdevs": 4, 00:12:48.861 "num_base_bdevs_discovered": 4, 00:12:48.861 "num_base_bdevs_operational": 4, 00:12:48.861 "base_bdevs_list": [ 00:12:48.861 { 00:12:48.861 "name": "NewBaseBdev", 00:12:48.861 "uuid": "8d7c0359-2918-4464-91b6-802626b868b5", 00:12:48.861 "is_configured": true, 00:12:48.861 "data_offset": 2048, 00:12:48.861 "data_size": 63488 00:12:48.861 }, 00:12:48.861 { 00:12:48.861 "name": "BaseBdev2", 00:12:48.861 "uuid": "57ba4d6a-a0d5-4d2d-9f03-1e01863c12fb", 00:12:48.861 "is_configured": true, 00:12:48.861 "data_offset": 2048, 00:12:48.861 "data_size": 63488 00:12:48.861 }, 00:12:48.861 { 00:12:48.861 
"name": "BaseBdev3", 00:12:48.861 "uuid": "6cd52324-37d5-4ac6-8789-b85b1d409b7f", 00:12:48.861 "is_configured": true, 00:12:48.861 "data_offset": 2048, 00:12:48.861 "data_size": 63488 00:12:48.861 }, 00:12:48.861 { 00:12:48.861 "name": "BaseBdev4", 00:12:48.861 "uuid": "d1573982-9d8d-4b79-be21-c013ea318df3", 00:12:48.861 "is_configured": true, 00:12:48.861 "data_offset": 2048, 00:12:48.861 "data_size": 63488 00:12:48.861 } 00:12:48.861 ] 00:12:48.861 } 00:12:48.861 } 00:12:48.861 }' 00:12:48.861 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:48.861 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:48.861 BaseBdev2 00:12:48.861 BaseBdev3 00:12:48.861 BaseBdev4' 00:12:48.861 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.120 08:46:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.120 08:46:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.120 08:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.120 08:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:49.120 08:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:49.120 08:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.120 08:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.120 [2024-11-20 08:46:20.020819] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:49.120 [2024-11-20 08:46:20.021855] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:49.120 [2024-11-20 08:46:20.021979] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:49.120 [2024-11-20 08:46:20.022085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:49.120 [2024-11-20 08:46:20.022103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:49.120 08:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.120 08:46:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70130 00:12:49.120 08:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70130 ']' 00:12:49.120 08:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70130 00:12:49.120 08:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:49.120 08:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:49.378 08:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70130 00:12:49.378 killing process with pid 70130 00:12:49.378 08:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:49.378 08:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:49.379 08:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70130' 00:12:49.379 08:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70130 00:12:49.379 [2024-11-20 08:46:20.060631] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:49.379 08:46:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70130 00:12:49.637 [2024-11-20 08:46:20.412788] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:50.573 ************************************ 00:12:50.573 END TEST raid_state_function_test_sb 00:12:50.573 ************************************ 00:12:50.573 08:46:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:50.573 00:12:50.573 real 0m12.742s 00:12:50.573 user 0m21.155s 00:12:50.573 sys 
0m1.745s 00:12:50.573 08:46:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.573 08:46:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.831 08:46:21 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:12:50.831 08:46:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:50.831 08:46:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.831 08:46:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:50.831 ************************************ 00:12:50.831 START TEST raid_superblock_test 00:12:50.831 ************************************ 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70810 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:50.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70810 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70810 ']' 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.831 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.831 [2024-11-20 08:46:21.610749] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:12:50.831 [2024-11-20 08:46:21.611177] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70810 ] 00:12:51.090 [2024-11-20 08:46:21.794263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.090 [2024-11-20 08:46:21.925299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.348 [2024-11-20 08:46:22.127395] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.348 [2024-11-20 08:46:22.127472] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:52.056 
08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.056 malloc1 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.056 [2024-11-20 08:46:22.665530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:52.056 [2024-11-20 08:46:22.665774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.056 [2024-11-20 08:46:22.665948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:52.056 [2024-11-20 08:46:22.666076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.056 [2024-11-20 08:46:22.669127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.056 [2024-11-20 08:46:22.669319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:52.056 pt1 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.056 malloc2 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.056 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.057 [2024-11-20 08:46:22.721701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:52.057 [2024-11-20 08:46:22.721916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.057 [2024-11-20 08:46:22.721996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:52.057 [2024-11-20 08:46:22.722121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.057 [2024-11-20 08:46:22.725058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.057 [2024-11-20 08:46:22.725241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:52.057 
pt2 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.057 malloc3 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.057 [2024-11-20 08:46:22.786668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:52.057 [2024-11-20 08:46:22.786867] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.057 [2024-11-20 08:46:22.786948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:52.057 [2024-11-20 08:46:22.787075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.057 [2024-11-20 08:46:22.790086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.057 [2024-11-20 08:46:22.790251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:52.057 pt3 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.057 malloc4 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.057 [2024-11-20 08:46:22.842957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:52.057 [2024-11-20 08:46:22.843165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.057 [2024-11-20 08:46:22.843256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:52.057 [2024-11-20 08:46:22.843439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.057 [2024-11-20 08:46:22.846272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.057 [2024-11-20 08:46:22.846315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:52.057 pt4 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.057 [2024-11-20 08:46:22.851106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:52.057 [2024-11-20 
08:46:22.853553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:52.057 [2024-11-20 08:46:22.853791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:52.057 [2024-11-20 08:46:22.853904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:52.057 [2024-11-20 08:46:22.854184] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:52.057 [2024-11-20 08:46:22.854204] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:52.057 [2024-11-20 08:46:22.854525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:52.057 [2024-11-20 08:46:22.854752] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:52.057 [2024-11-20 08:46:22.854774] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:52.057 [2024-11-20 08:46:22.854999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.057 "name": "raid_bdev1", 00:12:52.057 "uuid": "bc83ab20-212c-48e5-952c-441df5e1163b", 00:12:52.057 "strip_size_kb": 64, 00:12:52.057 "state": "online", 00:12:52.057 "raid_level": "raid0", 00:12:52.057 "superblock": true, 00:12:52.057 "num_base_bdevs": 4, 00:12:52.057 "num_base_bdevs_discovered": 4, 00:12:52.057 "num_base_bdevs_operational": 4, 00:12:52.057 "base_bdevs_list": [ 00:12:52.057 { 00:12:52.057 "name": "pt1", 00:12:52.057 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:52.057 "is_configured": true, 00:12:52.057 "data_offset": 2048, 00:12:52.057 "data_size": 63488 00:12:52.057 }, 00:12:52.057 { 00:12:52.057 "name": "pt2", 00:12:52.057 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:52.057 "is_configured": true, 00:12:52.057 "data_offset": 2048, 00:12:52.057 "data_size": 63488 00:12:52.057 }, 00:12:52.057 { 00:12:52.057 "name": "pt3", 00:12:52.057 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:52.057 "is_configured": true, 00:12:52.057 "data_offset": 2048, 00:12:52.057 
"data_size": 63488 00:12:52.057 }, 00:12:52.057 { 00:12:52.057 "name": "pt4", 00:12:52.057 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:52.057 "is_configured": true, 00:12:52.057 "data_offset": 2048, 00:12:52.057 "data_size": 63488 00:12:52.057 } 00:12:52.057 ] 00:12:52.057 }' 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.057 08:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.625 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:52.625 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:52.625 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:52.625 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:52.625 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:52.625 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:52.625 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:52.625 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.625 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.625 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:52.625 [2024-11-20 08:46:23.363664] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:52.625 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.625 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:52.625 "name": "raid_bdev1", 00:12:52.625 "aliases": [ 00:12:52.625 "bc83ab20-212c-48e5-952c-441df5e1163b" 
00:12:52.625 ], 00:12:52.625 "product_name": "Raid Volume", 00:12:52.625 "block_size": 512, 00:12:52.625 "num_blocks": 253952, 00:12:52.625 "uuid": "bc83ab20-212c-48e5-952c-441df5e1163b", 00:12:52.625 "assigned_rate_limits": { 00:12:52.625 "rw_ios_per_sec": 0, 00:12:52.625 "rw_mbytes_per_sec": 0, 00:12:52.625 "r_mbytes_per_sec": 0, 00:12:52.625 "w_mbytes_per_sec": 0 00:12:52.625 }, 00:12:52.625 "claimed": false, 00:12:52.625 "zoned": false, 00:12:52.625 "supported_io_types": { 00:12:52.625 "read": true, 00:12:52.625 "write": true, 00:12:52.625 "unmap": true, 00:12:52.625 "flush": true, 00:12:52.625 "reset": true, 00:12:52.625 "nvme_admin": false, 00:12:52.625 "nvme_io": false, 00:12:52.625 "nvme_io_md": false, 00:12:52.625 "write_zeroes": true, 00:12:52.625 "zcopy": false, 00:12:52.625 "get_zone_info": false, 00:12:52.625 "zone_management": false, 00:12:52.625 "zone_append": false, 00:12:52.625 "compare": false, 00:12:52.625 "compare_and_write": false, 00:12:52.625 "abort": false, 00:12:52.625 "seek_hole": false, 00:12:52.625 "seek_data": false, 00:12:52.625 "copy": false, 00:12:52.625 "nvme_iov_md": false 00:12:52.625 }, 00:12:52.625 "memory_domains": [ 00:12:52.625 { 00:12:52.625 "dma_device_id": "system", 00:12:52.625 "dma_device_type": 1 00:12:52.625 }, 00:12:52.625 { 00:12:52.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.625 "dma_device_type": 2 00:12:52.625 }, 00:12:52.625 { 00:12:52.625 "dma_device_id": "system", 00:12:52.625 "dma_device_type": 1 00:12:52.625 }, 00:12:52.625 { 00:12:52.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.625 "dma_device_type": 2 00:12:52.625 }, 00:12:52.625 { 00:12:52.625 "dma_device_id": "system", 00:12:52.625 "dma_device_type": 1 00:12:52.625 }, 00:12:52.625 { 00:12:52.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.625 "dma_device_type": 2 00:12:52.625 }, 00:12:52.625 { 00:12:52.625 "dma_device_id": "system", 00:12:52.625 "dma_device_type": 1 00:12:52.625 }, 00:12:52.625 { 00:12:52.625 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:52.625 "dma_device_type": 2 00:12:52.625 } 00:12:52.625 ], 00:12:52.625 "driver_specific": { 00:12:52.625 "raid": { 00:12:52.625 "uuid": "bc83ab20-212c-48e5-952c-441df5e1163b", 00:12:52.625 "strip_size_kb": 64, 00:12:52.625 "state": "online", 00:12:52.625 "raid_level": "raid0", 00:12:52.625 "superblock": true, 00:12:52.625 "num_base_bdevs": 4, 00:12:52.625 "num_base_bdevs_discovered": 4, 00:12:52.625 "num_base_bdevs_operational": 4, 00:12:52.625 "base_bdevs_list": [ 00:12:52.625 { 00:12:52.625 "name": "pt1", 00:12:52.625 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:52.625 "is_configured": true, 00:12:52.625 "data_offset": 2048, 00:12:52.625 "data_size": 63488 00:12:52.625 }, 00:12:52.625 { 00:12:52.625 "name": "pt2", 00:12:52.625 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:52.625 "is_configured": true, 00:12:52.625 "data_offset": 2048, 00:12:52.625 "data_size": 63488 00:12:52.625 }, 00:12:52.625 { 00:12:52.625 "name": "pt3", 00:12:52.625 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:52.625 "is_configured": true, 00:12:52.625 "data_offset": 2048, 00:12:52.625 "data_size": 63488 00:12:52.625 }, 00:12:52.625 { 00:12:52.625 "name": "pt4", 00:12:52.625 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:52.625 "is_configured": true, 00:12:52.625 "data_offset": 2048, 00:12:52.625 "data_size": 63488 00:12:52.625 } 00:12:52.625 ] 00:12:52.625 } 00:12:52.625 } 00:12:52.625 }' 00:12:52.625 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:52.625 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:52.625 pt2 00:12:52.625 pt3 00:12:52.625 pt4' 00:12:52.625 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.626 08:46:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:52.626 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.626 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:52.626 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.626 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.626 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.886 08:46:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.886 [2024-11-20 08:46:23.735770] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bc83ab20-212c-48e5-952c-441df5e1163b 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bc83ab20-212c-48e5-952c-441df5e1163b ']' 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.886 [2024-11-20 08:46:23.775424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:52.886 [2024-11-20 08:46:23.775568] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:52.886 [2024-11-20 08:46:23.775801] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:52.886 [2024-11-20 08:46:23.776027] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:52.886 [2024-11-20 08:46:23.776209] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.886 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.145 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:53.145 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:53.145 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:53.145 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:53.145 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.145 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.145 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.145 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:53.145 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:53.145 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.145 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.145 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.145 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:53.145 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:53.145 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.145 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.145 08:46:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.145 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:53.145 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:53.145 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.145 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.145 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:53.146 08:46:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.146 [2024-11-20 08:46:23.935501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:53.146 [2024-11-20 08:46:23.937959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:53.146 [2024-11-20 08:46:23.938022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:53.146 [2024-11-20 08:46:23.938078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:53.146 [2024-11-20 08:46:23.938306] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:53.146 [2024-11-20 08:46:23.938524] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:53.146 [2024-11-20 08:46:23.938747] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:53.146 [2024-11-20 08:46:23.938918] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:53.146 [2024-11-20 08:46:23.939170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:53.146 [2024-11-20 08:46:23.939295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:12:53.146 request: 00:12:53.146 { 00:12:53.146 "name": "raid_bdev1", 00:12:53.146 "raid_level": "raid0", 00:12:53.146 "base_bdevs": [ 00:12:53.146 "malloc1", 00:12:53.146 "malloc2", 00:12:53.146 "malloc3", 00:12:53.146 "malloc4" 00:12:53.146 ], 00:12:53.146 "strip_size_kb": 64, 00:12:53.146 "superblock": false, 00:12:53.146 "method": "bdev_raid_create", 00:12:53.146 "req_id": 1 00:12:53.146 } 00:12:53.146 Got JSON-RPC error response 00:12:53.146 response: 00:12:53.146 { 00:12:53.146 "code": -17, 00:12:53.146 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:53.146 } 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.146 08:46:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.146 [2024-11-20 08:46:24.003689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:53.146 [2024-11-20 08:46:24.003774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.146 [2024-11-20 08:46:24.003802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:53.146 [2024-11-20 08:46:24.003820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.146 [2024-11-20 08:46:24.006763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.146 [2024-11-20 08:46:24.006821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:53.146 [2024-11-20 08:46:24.006937] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:53.146 [2024-11-20 08:46:24.007025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:53.146 pt1 00:12:53.146 08:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.146 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:53.146 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.146 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.146 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:53.146 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.146 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:53.146 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.146 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.146 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.146 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.146 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.146 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.146 08:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.146 08:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.146 08:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.405 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.405 "name": "raid_bdev1", 00:12:53.405 "uuid": "bc83ab20-212c-48e5-952c-441df5e1163b", 00:12:53.405 "strip_size_kb": 64, 00:12:53.405 "state": "configuring", 00:12:53.405 "raid_level": "raid0", 00:12:53.405 "superblock": true, 00:12:53.405 "num_base_bdevs": 4, 00:12:53.405 "num_base_bdevs_discovered": 1, 00:12:53.405 "num_base_bdevs_operational": 4, 00:12:53.405 "base_bdevs_list": [ 00:12:53.405 { 00:12:53.405 "name": "pt1", 00:12:53.405 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:53.405 "is_configured": true, 00:12:53.405 "data_offset": 2048, 00:12:53.405 "data_size": 63488 00:12:53.405 }, 00:12:53.405 { 00:12:53.405 "name": null, 00:12:53.405 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:53.405 "is_configured": false, 00:12:53.405 "data_offset": 2048, 00:12:53.405 "data_size": 63488 00:12:53.405 }, 00:12:53.405 { 00:12:53.405 "name": null, 00:12:53.405 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:53.405 "is_configured": false, 00:12:53.405 "data_offset": 2048, 00:12:53.405 "data_size": 63488 00:12:53.405 }, 00:12:53.405 { 00:12:53.405 "name": null, 00:12:53.405 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:53.405 "is_configured": false, 00:12:53.405 "data_offset": 2048, 00:12:53.405 "data_size": 63488 00:12:53.405 } 00:12:53.405 ] 00:12:53.405 }' 00:12:53.405 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.405 08:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.664 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:53.664 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:53.664 08:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.664 08:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.664 [2024-11-20 08:46:24.519857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:53.664 [2024-11-20 08:46:24.519956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.664 [2024-11-20 08:46:24.519988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:53.664 [2024-11-20 08:46:24.520012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.664 [2024-11-20 08:46:24.520594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.665 [2024-11-20 08:46:24.520652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:53.665 [2024-11-20 08:46:24.520757] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:53.665 [2024-11-20 08:46:24.520797] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:53.665 pt2 00:12:53.665 08:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.665 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:53.665 08:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.665 08:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.665 [2024-11-20 08:46:24.527918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:53.665 08:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.665 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:53.665 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.665 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.665 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:53.665 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.665 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:53.665 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.665 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.665 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.665 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.665 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.665 08:46:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.665 08:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.665 08:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.665 08:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.923 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.923 "name": "raid_bdev1", 00:12:53.923 "uuid": "bc83ab20-212c-48e5-952c-441df5e1163b", 00:12:53.923 "strip_size_kb": 64, 00:12:53.923 "state": "configuring", 00:12:53.923 "raid_level": "raid0", 00:12:53.923 "superblock": true, 00:12:53.923 "num_base_bdevs": 4, 00:12:53.923 "num_base_bdevs_discovered": 1, 00:12:53.923 "num_base_bdevs_operational": 4, 00:12:53.923 "base_bdevs_list": [ 00:12:53.923 { 00:12:53.923 "name": "pt1", 00:12:53.923 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:53.923 "is_configured": true, 00:12:53.923 "data_offset": 2048, 00:12:53.923 "data_size": 63488 00:12:53.923 }, 00:12:53.923 { 00:12:53.923 "name": null, 00:12:53.923 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:53.923 "is_configured": false, 00:12:53.923 "data_offset": 0, 00:12:53.923 "data_size": 63488 00:12:53.923 }, 00:12:53.923 { 00:12:53.923 "name": null, 00:12:53.923 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:53.923 "is_configured": false, 00:12:53.923 "data_offset": 2048, 00:12:53.923 "data_size": 63488 00:12:53.923 }, 00:12:53.923 { 00:12:53.923 "name": null, 00:12:53.923 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:53.923 "is_configured": false, 00:12:53.923 "data_offset": 2048, 00:12:53.923 "data_size": 63488 00:12:53.923 } 00:12:53.923 ] 00:12:53.923 }' 00:12:53.923 08:46:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.923 08:46:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.183 [2024-11-20 08:46:25.072374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:54.183 [2024-11-20 08:46:25.072458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.183 [2024-11-20 08:46:25.072490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:54.183 [2024-11-20 08:46:25.072505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.183 [2024-11-20 08:46:25.073079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.183 [2024-11-20 08:46:25.073114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:54.183 [2024-11-20 08:46:25.073236] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:54.183 [2024-11-20 08:46:25.073271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:54.183 pt2 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.183 [2024-11-20 08:46:25.080332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:54.183 [2024-11-20 08:46:25.080395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.183 [2024-11-20 08:46:25.080430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:54.183 [2024-11-20 08:46:25.080447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.183 [2024-11-20 08:46:25.080902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.183 [2024-11-20 08:46:25.080938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:54.183 [2024-11-20 08:46:25.081022] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:54.183 [2024-11-20 08:46:25.081059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:54.183 pt3 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.183 [2024-11-20 08:46:25.088311] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:12:54.183 [2024-11-20 08:46:25.088374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.183 [2024-11-20 08:46:25.088403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:54.183 [2024-11-20 08:46:25.088417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.183 [2024-11-20 08:46:25.088893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.183 [2024-11-20 08:46:25.088930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:54.183 [2024-11-20 08:46:25.089018] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:54.183 [2024-11-20 08:46:25.089048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:54.183 [2024-11-20 08:46:25.089242] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:54.183 [2024-11-20 08:46:25.089260] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:54.183 [2024-11-20 08:46:25.089557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:54.183 [2024-11-20 08:46:25.089755] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:54.183 [2024-11-20 08:46:25.089778] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:54.183 [2024-11-20 08:46:25.089935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.183 pt4 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:54.183 
08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.183 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.443 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.443 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.443 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.443 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.443 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.443 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.443 "name": "raid_bdev1", 00:12:54.443 "uuid": "bc83ab20-212c-48e5-952c-441df5e1163b", 00:12:54.443 "strip_size_kb": 64, 00:12:54.443 "state": "online", 00:12:54.443 "raid_level": "raid0", 00:12:54.443 "superblock": true, 00:12:54.443 
"num_base_bdevs": 4, 00:12:54.443 "num_base_bdevs_discovered": 4, 00:12:54.443 "num_base_bdevs_operational": 4, 00:12:54.443 "base_bdevs_list": [ 00:12:54.443 { 00:12:54.443 "name": "pt1", 00:12:54.443 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:54.443 "is_configured": true, 00:12:54.443 "data_offset": 2048, 00:12:54.443 "data_size": 63488 00:12:54.443 }, 00:12:54.443 { 00:12:54.443 "name": "pt2", 00:12:54.443 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:54.443 "is_configured": true, 00:12:54.443 "data_offset": 2048, 00:12:54.443 "data_size": 63488 00:12:54.443 }, 00:12:54.443 { 00:12:54.443 "name": "pt3", 00:12:54.443 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:54.443 "is_configured": true, 00:12:54.443 "data_offset": 2048, 00:12:54.443 "data_size": 63488 00:12:54.443 }, 00:12:54.443 { 00:12:54.443 "name": "pt4", 00:12:54.443 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:54.443 "is_configured": true, 00:12:54.443 "data_offset": 2048, 00:12:54.443 "data_size": 63488 00:12:54.443 } 00:12:54.443 ] 00:12:54.443 }' 00:12:54.443 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.443 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.011 [2024-11-20 08:46:25.644925] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:55.011 "name": "raid_bdev1", 00:12:55.011 "aliases": [ 00:12:55.011 "bc83ab20-212c-48e5-952c-441df5e1163b" 00:12:55.011 ], 00:12:55.011 "product_name": "Raid Volume", 00:12:55.011 "block_size": 512, 00:12:55.011 "num_blocks": 253952, 00:12:55.011 "uuid": "bc83ab20-212c-48e5-952c-441df5e1163b", 00:12:55.011 "assigned_rate_limits": { 00:12:55.011 "rw_ios_per_sec": 0, 00:12:55.011 "rw_mbytes_per_sec": 0, 00:12:55.011 "r_mbytes_per_sec": 0, 00:12:55.011 "w_mbytes_per_sec": 0 00:12:55.011 }, 00:12:55.011 "claimed": false, 00:12:55.011 "zoned": false, 00:12:55.011 "supported_io_types": { 00:12:55.011 "read": true, 00:12:55.011 "write": true, 00:12:55.011 "unmap": true, 00:12:55.011 "flush": true, 00:12:55.011 "reset": true, 00:12:55.011 "nvme_admin": false, 00:12:55.011 "nvme_io": false, 00:12:55.011 "nvme_io_md": false, 00:12:55.011 "write_zeroes": true, 00:12:55.011 "zcopy": false, 00:12:55.011 "get_zone_info": false, 00:12:55.011 "zone_management": false, 00:12:55.011 "zone_append": false, 00:12:55.011 "compare": false, 00:12:55.011 "compare_and_write": false, 00:12:55.011 "abort": false, 00:12:55.011 "seek_hole": false, 00:12:55.011 "seek_data": false, 00:12:55.011 "copy": false, 00:12:55.011 "nvme_iov_md": false 00:12:55.011 }, 00:12:55.011 "memory_domains": [ 00:12:55.011 { 00:12:55.011 "dma_device_id": "system", 
00:12:55.011 "dma_device_type": 1 00:12:55.011 }, 00:12:55.011 { 00:12:55.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.011 "dma_device_type": 2 00:12:55.011 }, 00:12:55.011 { 00:12:55.011 "dma_device_id": "system", 00:12:55.011 "dma_device_type": 1 00:12:55.011 }, 00:12:55.011 { 00:12:55.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.011 "dma_device_type": 2 00:12:55.011 }, 00:12:55.011 { 00:12:55.011 "dma_device_id": "system", 00:12:55.011 "dma_device_type": 1 00:12:55.011 }, 00:12:55.011 { 00:12:55.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.011 "dma_device_type": 2 00:12:55.011 }, 00:12:55.011 { 00:12:55.011 "dma_device_id": "system", 00:12:55.011 "dma_device_type": 1 00:12:55.011 }, 00:12:55.011 { 00:12:55.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.011 "dma_device_type": 2 00:12:55.011 } 00:12:55.011 ], 00:12:55.011 "driver_specific": { 00:12:55.011 "raid": { 00:12:55.011 "uuid": "bc83ab20-212c-48e5-952c-441df5e1163b", 00:12:55.011 "strip_size_kb": 64, 00:12:55.011 "state": "online", 00:12:55.011 "raid_level": "raid0", 00:12:55.011 "superblock": true, 00:12:55.011 "num_base_bdevs": 4, 00:12:55.011 "num_base_bdevs_discovered": 4, 00:12:55.011 "num_base_bdevs_operational": 4, 00:12:55.011 "base_bdevs_list": [ 00:12:55.011 { 00:12:55.011 "name": "pt1", 00:12:55.011 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:55.011 "is_configured": true, 00:12:55.011 "data_offset": 2048, 00:12:55.011 "data_size": 63488 00:12:55.011 }, 00:12:55.011 { 00:12:55.011 "name": "pt2", 00:12:55.011 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:55.011 "is_configured": true, 00:12:55.011 "data_offset": 2048, 00:12:55.011 "data_size": 63488 00:12:55.011 }, 00:12:55.011 { 00:12:55.011 "name": "pt3", 00:12:55.011 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:55.011 "is_configured": true, 00:12:55.011 "data_offset": 2048, 00:12:55.011 "data_size": 63488 00:12:55.011 }, 00:12:55.011 { 00:12:55.011 "name": "pt4", 00:12:55.011 
"uuid": "00000000-0000-0000-0000-000000000004", 00:12:55.011 "is_configured": true, 00:12:55.011 "data_offset": 2048, 00:12:55.011 "data_size": 63488 00:12:55.011 } 00:12:55.011 ] 00:12:55.011 } 00:12:55.011 } 00:12:55.011 }' 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:55.011 pt2 00:12:55.011 pt3 00:12:55.011 pt4' 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.011 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.012 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.012 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.270 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.270 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.270 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.270 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:55.270 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.271 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.271 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.271 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.271 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.271 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.271 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.271 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.271 08:46:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:55.271 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:55.271 08:46:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.271 08:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.271 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.271 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.271 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:55.271 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:55.271 08:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.271 08:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.271 [2024-11-20 08:46:26.053033] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.271 08:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.271 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bc83ab20-212c-48e5-952c-441df5e1163b '!=' bc83ab20-212c-48e5-952c-441df5e1163b ']' 00:12:55.271 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:55.271 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:55.271 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:55.271 08:46:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70810 00:12:55.271 08:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70810 ']' 00:12:55.271 08:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70810 00:12:55.271 08:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:55.271 08:46:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:55.271 08:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70810 00:12:55.271 08:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:55.271 killing process with pid 70810 00:12:55.271 08:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:55.271 08:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70810' 00:12:55.271 08:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70810 00:12:55.271 [2024-11-20 08:46:26.130869] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:55.271 08:46:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70810 00:12:55.271 [2024-11-20 08:46:26.130978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.271 [2024-11-20 08:46:26.131083] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:55.271 [2024-11-20 08:46:26.131100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:55.839 [2024-11-20 08:46:26.492360] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:56.775 08:46:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:56.775 00:12:56.775 real 0m6.015s 00:12:56.775 user 0m9.117s 00:12:56.775 sys 0m0.865s 00:12:56.775 08:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:56.775 08:46:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.775 ************************************ 00:12:56.775 END TEST raid_superblock_test 00:12:56.775 ************************************ 00:12:56.775 
08:46:27 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:12:56.775 08:46:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:56.775 08:46:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:56.775 08:46:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:56.775 ************************************ 00:12:56.775 START TEST raid_read_error_test 00:12:56.775 ************************************ 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LwjOrt9ldp 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71082 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:56.775 08:46:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71082 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71082 ']' 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.775 08:46:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.033 [2024-11-20 08:46:27.704781] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:12:57.033 [2024-11-20 08:46:27.704970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71082 ] 00:12:57.033 [2024-11-20 08:46:27.891298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.292 [2024-11-20 08:46:28.021575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.643 [2024-11-20 08:46:28.228557] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:57.643 [2024-11-20 08:46:28.228615] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:57.903 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.903 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:57.903 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:57.903 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:57.903 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.903 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.903 BaseBdev1_malloc 00:12:57.903 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.903 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:57.903 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.903 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.903 true 00:12:57.903 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:57.903 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:57.903 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.903 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.903 [2024-11-20 08:46:28.762322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:57.903 [2024-11-20 08:46:28.762406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.903 [2024-11-20 08:46:28.762443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:57.903 [2024-11-20 08:46:28.762463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.903 [2024-11-20 08:46:28.765685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.903 [2024-11-20 08:46:28.765753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:57.903 BaseBdev1 00:12:57.903 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.903 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:57.903 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:57.903 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.903 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.903 BaseBdev2_malloc 00:12:57.903 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.903 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:57.903 08:46:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.903 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.163 true 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.163 [2024-11-20 08:46:28.827225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:58.163 [2024-11-20 08:46:28.827312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.163 [2024-11-20 08:46:28.827340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:58.163 [2024-11-20 08:46:28.827357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.163 [2024-11-20 08:46:28.830260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.163 [2024-11-20 08:46:28.830312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:58.163 BaseBdev2 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.163 BaseBdev3_malloc 00:12:58.163 08:46:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.163 true 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.163 [2024-11-20 08:46:28.902479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:58.163 [2024-11-20 08:46:28.902607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.163 [2024-11-20 08:46:28.902635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:58.163 [2024-11-20 08:46:28.902653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.163 [2024-11-20 08:46:28.905651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.163 [2024-11-20 08:46:28.905699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:58.163 BaseBdev3 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.163 BaseBdev4_malloc 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.163 true 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.163 [2024-11-20 08:46:28.966317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:58.163 [2024-11-20 08:46:28.966405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.163 [2024-11-20 08:46:28.966435] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:58.163 [2024-11-20 08:46:28.966468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.163 [2024-11-20 08:46:28.969441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.163 [2024-11-20 08:46:28.969512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:58.163 BaseBdev4 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.163 [2024-11-20 08:46:28.974516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:58.163 [2024-11-20 08:46:28.977006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:58.163 [2024-11-20 08:46:28.977119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:58.163 [2024-11-20 08:46:28.977279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:58.163 [2024-11-20 08:46:28.977651] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:58.163 [2024-11-20 08:46:28.977689] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:58.163 [2024-11-20 08:46:28.978034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:58.163 [2024-11-20 08:46:28.978304] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:58.163 [2024-11-20 08:46:28.978324] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:58.163 [2024-11-20 08:46:28.978574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:58.163 08:46:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.163 08:46:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.163 08:46:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.163 08:46:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.163 "name": "raid_bdev1", 00:12:58.163 "uuid": "35212561-fb7f-4ea7-89bf-2a8e32b572d0", 00:12:58.163 "strip_size_kb": 64, 00:12:58.163 "state": "online", 00:12:58.163 "raid_level": "raid0", 00:12:58.163 "superblock": true, 00:12:58.163 "num_base_bdevs": 4, 00:12:58.163 "num_base_bdevs_discovered": 4, 00:12:58.163 "num_base_bdevs_operational": 4, 00:12:58.163 "base_bdevs_list": [ 00:12:58.163 
{ 00:12:58.163 "name": "BaseBdev1", 00:12:58.163 "uuid": "f5b6fa58-88df-5bb6-927e-7ac23cfd1452", 00:12:58.163 "is_configured": true, 00:12:58.163 "data_offset": 2048, 00:12:58.163 "data_size": 63488 00:12:58.163 }, 00:12:58.163 { 00:12:58.163 "name": "BaseBdev2", 00:12:58.163 "uuid": "82cb5f1a-3670-5977-8b14-bcc00ef8aa5a", 00:12:58.163 "is_configured": true, 00:12:58.163 "data_offset": 2048, 00:12:58.163 "data_size": 63488 00:12:58.163 }, 00:12:58.163 { 00:12:58.163 "name": "BaseBdev3", 00:12:58.163 "uuid": "728d64ea-bc31-5799-925f-e447ef0759c5", 00:12:58.163 "is_configured": true, 00:12:58.163 "data_offset": 2048, 00:12:58.163 "data_size": 63488 00:12:58.163 }, 00:12:58.163 { 00:12:58.163 "name": "BaseBdev4", 00:12:58.163 "uuid": "1cff5f11-4302-580a-9538-122cc3b9c852", 00:12:58.163 "is_configured": true, 00:12:58.163 "data_offset": 2048, 00:12:58.163 "data_size": 63488 00:12:58.163 } 00:12:58.163 ] 00:12:58.163 }' 00:12:58.163 08:46:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.163 08:46:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.731 08:46:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:58.731 08:46:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:58.731 [2024-11-20 08:46:29.592306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:59.668 08:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:59.668 08:46:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.668 08:46:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.668 08:46:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.668 08:46:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:59.668 08:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:59.668 08:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:59.668 08:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:59.668 08:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.668 08:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.668 08:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:59.668 08:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.668 08:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:59.668 08:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.668 08:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.668 08:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.668 08:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.668 08:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.668 08:46:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.668 08:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.668 08:46:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.668 08:46:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.927 08:46:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.927 "name": "raid_bdev1", 00:12:59.927 "uuid": "35212561-fb7f-4ea7-89bf-2a8e32b572d0", 00:12:59.927 "strip_size_kb": 64, 00:12:59.927 "state": "online", 00:12:59.927 "raid_level": "raid0", 00:12:59.927 "superblock": true, 00:12:59.927 "num_base_bdevs": 4, 00:12:59.927 "num_base_bdevs_discovered": 4, 00:12:59.927 "num_base_bdevs_operational": 4, 00:12:59.927 "base_bdevs_list": [ 00:12:59.927 { 00:12:59.927 "name": "BaseBdev1", 00:12:59.927 "uuid": "f5b6fa58-88df-5bb6-927e-7ac23cfd1452", 00:12:59.927 "is_configured": true, 00:12:59.927 "data_offset": 2048, 00:12:59.927 "data_size": 63488 00:12:59.927 }, 00:12:59.927 { 00:12:59.927 "name": "BaseBdev2", 00:12:59.927 "uuid": "82cb5f1a-3670-5977-8b14-bcc00ef8aa5a", 00:12:59.927 "is_configured": true, 00:12:59.927 "data_offset": 2048, 00:12:59.927 "data_size": 63488 00:12:59.927 }, 00:12:59.927 { 00:12:59.927 "name": "BaseBdev3", 00:12:59.927 "uuid": "728d64ea-bc31-5799-925f-e447ef0759c5", 00:12:59.927 "is_configured": true, 00:12:59.927 "data_offset": 2048, 00:12:59.927 "data_size": 63488 00:12:59.927 }, 00:12:59.927 { 00:12:59.927 "name": "BaseBdev4", 00:12:59.927 "uuid": "1cff5f11-4302-580a-9538-122cc3b9c852", 00:12:59.927 "is_configured": true, 00:12:59.927 "data_offset": 2048, 00:12:59.927 "data_size": 63488 00:12:59.927 } 00:12:59.927 ] 00:12:59.927 }' 00:12:59.927 08:46:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.927 08:46:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.184 08:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:00.185 08:46:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.185 08:46:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.185 [2024-11-20 08:46:31.043990] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:00.185 [2024-11-20 08:46:31.044893] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:00.185 { 00:13:00.185 "results": [ 00:13:00.185 { 00:13:00.185 "job": "raid_bdev1", 00:13:00.185 "core_mask": "0x1", 00:13:00.185 "workload": "randrw", 00:13:00.185 "percentage": 50, 00:13:00.185 "status": "finished", 00:13:00.185 "queue_depth": 1, 00:13:00.185 "io_size": 131072, 00:13:00.185 "runtime": 1.450028, 00:13:00.185 "iops": 10299.801107288962, 00:13:00.185 "mibps": 1287.4751384111203, 00:13:00.185 "io_failed": 1, 00:13:00.185 "io_timeout": 0, 00:13:00.185 "avg_latency_us": 135.479986366071, 00:13:00.185 "min_latency_us": 40.96, 00:13:00.185 "max_latency_us": 1846.9236363636364 00:13:00.185 } 00:13:00.185 ], 00:13:00.185 "core_count": 1 00:13:00.185 } 00:13:00.185 [2024-11-20 08:46:31.048275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:00.185 [2024-11-20 08:46:31.048411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.185 [2024-11-20 08:46:31.048479] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:00.185 [2024-11-20 08:46:31.048499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:00.185 08:46:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.185 08:46:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71082 00:13:00.185 08:46:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71082 ']' 00:13:00.185 08:46:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71082 00:13:00.185 08:46:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:00.185 08:46:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:00.185 08:46:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71082 00:13:00.185 killing process with pid 71082 00:13:00.185 08:46:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:00.185 08:46:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:00.185 08:46:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71082' 00:13:00.185 08:46:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71082 00:13:00.185 [2024-11-20 08:46:31.087060] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:00.185 08:46:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71082 00:13:00.749 [2024-11-20 08:46:31.381788] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:01.724 08:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LwjOrt9ldp 00:13:01.724 08:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:01.724 08:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:01.724 08:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:13:01.724 08:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:01.724 08:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:01.724 08:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:01.724 08:46:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:13:01.724 ************************************ 00:13:01.724 END TEST raid_read_error_test 00:13:01.724 ************************************ 00:13:01.724 00:13:01.724 real 0m4.909s 
00:13:01.724 user 0m6.031s 00:13:01.724 sys 0m0.636s 00:13:01.724 08:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.724 08:46:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.724 08:46:32 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:13:01.724 08:46:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:01.724 08:46:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:01.724 08:46:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:01.724 ************************************ 00:13:01.724 START TEST raid_write_error_test 00:13:01.724 ************************************ 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:01.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LAPzFycRw5 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71229 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71229 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71229 ']' 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:01.724 08:46:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.982 [2024-11-20 08:46:32.659105] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:13:01.982 [2024-11-20 08:46:32.659303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71229 ] 00:13:01.982 [2024-11-20 08:46:32.848467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.241 [2024-11-20 08:46:33.008509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.499 [2024-11-20 08:46:33.250399] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:02.499 [2024-11-20 08:46:33.250453] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.067 BaseBdev1_malloc 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.067 true 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.067 [2024-11-20 08:46:33.746763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:03.067 [2024-11-20 08:46:33.746991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.067 [2024-11-20 08:46:33.747032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:03.067 [2024-11-20 08:46:33.747052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.067 [2024-11-20 08:46:33.749883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.067 [2024-11-20 08:46:33.749937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:03.067 BaseBdev1 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.067 BaseBdev2_malloc 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:03.067 08:46:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.067 true 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.067 [2024-11-20 08:46:33.807542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:03.067 [2024-11-20 08:46:33.807612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.067 [2024-11-20 08:46:33.807639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:03.067 [2024-11-20 08:46:33.807659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.067 [2024-11-20 08:46:33.810488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.067 [2024-11-20 08:46:33.810539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:03.067 BaseBdev2 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:03.067 BaseBdev3_malloc 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.067 true 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.067 [2024-11-20 08:46:33.886396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:03.067 [2024-11-20 08:46:33.886464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.067 [2024-11-20 08:46:33.886492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:03.067 [2024-11-20 08:46:33.886509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.067 [2024-11-20 08:46:33.889324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.067 [2024-11-20 08:46:33.889375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:03.067 BaseBdev3 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.067 BaseBdev4_malloc 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.067 true 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.067 [2024-11-20 08:46:33.942802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:03.067 [2024-11-20 08:46:33.942993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.067 [2024-11-20 08:46:33.943031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:03.067 [2024-11-20 08:46:33.943050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.067 [2024-11-20 08:46:33.945799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.067 [2024-11-20 08:46:33.945853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:03.067 BaseBdev4 
00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.067 [2024-11-20 08:46:33.950854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:03.067 [2024-11-20 08:46:33.953528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:03.067 [2024-11-20 08:46:33.953765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:03.067 [2024-11-20 08:46:33.953993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:03.067 [2024-11-20 08:46:33.954332] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:03.067 [2024-11-20 08:46:33.954362] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:03.067 [2024-11-20 08:46:33.954661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:03.067 [2024-11-20 08:46:33.954893] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:03.067 [2024-11-20 08:46:33.954919] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:03.067 [2024-11-20 08:46:33.955323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.067 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.068 08:46:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.068 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.068 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.068 08:46:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.326 08:46:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.326 "name": "raid_bdev1", 00:13:03.326 "uuid": "25e50707-587c-4719-81c8-a50b345ddbcf", 00:13:03.326 "strip_size_kb": 64, 00:13:03.326 "state": "online", 00:13:03.326 "raid_level": "raid0", 00:13:03.326 "superblock": true, 00:13:03.326 "num_base_bdevs": 4, 00:13:03.326 "num_base_bdevs_discovered": 4, 00:13:03.326 
"num_base_bdevs_operational": 4, 00:13:03.326 "base_bdevs_list": [ 00:13:03.326 { 00:13:03.326 "name": "BaseBdev1", 00:13:03.326 "uuid": "3b198b1c-7a33-5201-959a-b4088f9a264b", 00:13:03.326 "is_configured": true, 00:13:03.326 "data_offset": 2048, 00:13:03.326 "data_size": 63488 00:13:03.326 }, 00:13:03.326 { 00:13:03.326 "name": "BaseBdev2", 00:13:03.326 "uuid": "81cc46fc-1400-5ea2-a3d7-5eeec4f5ca8b", 00:13:03.326 "is_configured": true, 00:13:03.326 "data_offset": 2048, 00:13:03.326 "data_size": 63488 00:13:03.326 }, 00:13:03.326 { 00:13:03.326 "name": "BaseBdev3", 00:13:03.326 "uuid": "e6215c65-fdd6-5baa-8d68-dded7a9f82c9", 00:13:03.326 "is_configured": true, 00:13:03.327 "data_offset": 2048, 00:13:03.327 "data_size": 63488 00:13:03.327 }, 00:13:03.327 { 00:13:03.327 "name": "BaseBdev4", 00:13:03.327 "uuid": "7430b31e-7b94-5f04-914b-ed07f4062bec", 00:13:03.327 "is_configured": true, 00:13:03.327 "data_offset": 2048, 00:13:03.327 "data_size": 63488 00:13:03.327 } 00:13:03.327 ] 00:13:03.327 }' 00:13:03.327 08:46:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.327 08:46:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.585 08:46:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:03.585 08:46:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:03.843 [2024-11-20 08:46:34.592723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.778 "name": "raid_bdev1", 00:13:04.778 "uuid": "25e50707-587c-4719-81c8-a50b345ddbcf", 00:13:04.778 "strip_size_kb": 64, 00:13:04.778 "state": "online", 00:13:04.778 "raid_level": "raid0", 00:13:04.778 "superblock": true, 00:13:04.778 "num_base_bdevs": 4, 00:13:04.778 "num_base_bdevs_discovered": 4, 00:13:04.778 "num_base_bdevs_operational": 4, 00:13:04.778 "base_bdevs_list": [ 00:13:04.778 { 00:13:04.778 "name": "BaseBdev1", 00:13:04.778 "uuid": "3b198b1c-7a33-5201-959a-b4088f9a264b", 00:13:04.778 "is_configured": true, 00:13:04.778 "data_offset": 2048, 00:13:04.778 "data_size": 63488 00:13:04.778 }, 00:13:04.778 { 00:13:04.778 "name": "BaseBdev2", 00:13:04.778 "uuid": "81cc46fc-1400-5ea2-a3d7-5eeec4f5ca8b", 00:13:04.778 "is_configured": true, 00:13:04.778 "data_offset": 2048, 00:13:04.778 "data_size": 63488 00:13:04.778 }, 00:13:04.778 { 00:13:04.778 "name": "BaseBdev3", 00:13:04.778 "uuid": "e6215c65-fdd6-5baa-8d68-dded7a9f82c9", 00:13:04.778 "is_configured": true, 00:13:04.778 "data_offset": 2048, 00:13:04.778 "data_size": 63488 00:13:04.778 }, 00:13:04.778 { 00:13:04.778 "name": "BaseBdev4", 00:13:04.778 "uuid": "7430b31e-7b94-5f04-914b-ed07f4062bec", 00:13:04.778 "is_configured": true, 00:13:04.778 "data_offset": 2048, 00:13:04.778 "data_size": 63488 00:13:04.778 } 00:13:04.778 ] 00:13:04.778 }' 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.778 08:46:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.345 08:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:05.346 08:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.346 08:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:05.346 [2024-11-20 08:46:36.015862] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:05.346 [2024-11-20 08:46:36.016038] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:05.346 [2024-11-20 08:46:36.019561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.346 [2024-11-20 08:46:36.019773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.346 [2024-11-20 08:46:36.019962] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.346 [2024-11-20 08:46:36.020128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:05.346 { 00:13:05.346 "results": [ 00:13:05.346 { 00:13:05.346 "job": "raid_bdev1", 00:13:05.346 "core_mask": "0x1", 00:13:05.346 "workload": "randrw", 00:13:05.346 "percentage": 50, 00:13:05.346 "status": "finished", 00:13:05.346 "queue_depth": 1, 00:13:05.346 "io_size": 131072, 00:13:05.346 "runtime": 1.420791, 00:13:05.346 "iops": 10698.265965930246, 00:13:05.346 "mibps": 1337.2832457412808, 00:13:05.346 "io_failed": 1, 00:13:05.346 "io_timeout": 0, 00:13:05.346 "avg_latency_us": 130.65692759447643, 00:13:05.346 "min_latency_us": 40.261818181818185, 00:13:05.346 "max_latency_us": 1869.2654545454545 00:13:05.346 } 00:13:05.346 ], 00:13:05.346 "core_count": 1 00:13:05.346 } 00:13:05.346 08:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.346 08:46:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71229 00:13:05.346 08:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71229 ']' 00:13:05.346 08:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71229 00:13:05.346 08:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
uname 00:13:05.346 08:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:05.346 08:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71229 00:13:05.346 killing process with pid 71229 00:13:05.346 08:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:05.346 08:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:05.346 08:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71229' 00:13:05.346 08:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71229 00:13:05.346 [2024-11-20 08:46:36.056528] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:05.346 08:46:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71229 00:13:05.604 [2024-11-20 08:46:36.354162] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:06.561 08:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LAPzFycRw5 00:13:06.561 08:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:06.561 08:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:06.561 08:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:13:06.561 08:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:06.561 08:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:06.561 08:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:06.561 08:46:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:13:06.561 00:13:06.561 real 0m4.933s 00:13:06.561 user 0m6.089s 00:13:06.561 sys 0m0.627s 00:13:06.561 
08:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.561 ************************************ 00:13:06.561 END TEST raid_write_error_test 00:13:06.561 ************************************ 00:13:06.561 08:46:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.820 08:46:37 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:06.820 08:46:37 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:13:06.820 08:46:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:06.820 08:46:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.820 08:46:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:06.820 ************************************ 00:13:06.820 START TEST raid_state_function_test 00:13:06.820 ************************************ 00:13:06.820 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:13:06.820 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:06.820 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:06.820 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:06.820 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:06.820 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:06.820 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:06.820 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:06.820 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:06.820 08:46:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:06.820 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:06.820 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:06.820 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:06.820 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:06.820 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:06.820 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:06.820 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:06.820 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:06.820 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:06.821 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:06.821 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:06.821 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:06.821 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:06.821 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:06.821 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:06.821 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:06.821 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:06.821 08:46:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:06.821 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:06.821 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:06.821 Process raid pid: 71378 00:13:06.821 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71378 00:13:06.821 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71378' 00:13:06.821 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71378 00:13:06.821 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:06.821 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71378 ']' 00:13:06.821 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.821 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.821 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.821 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.821 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.821 [2024-11-20 08:46:37.645774] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:13:06.821 [2024-11-20 08:46:37.646250] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.079 [2024-11-20 08:46:37.838047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.079 [2024-11-20 08:46:37.973014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.337 [2024-11-20 08:46:38.179591] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:07.337 [2024-11-20 08:46:38.179652] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:07.903 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:07.903 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:07.903 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:07.903 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.903 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.903 [2024-11-20 08:46:38.737117] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:07.903 [2024-11-20 08:46:38.737200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:07.903 [2024-11-20 08:46:38.737219] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:07.903 [2024-11-20 08:46:38.737237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:07.903 [2024-11-20 08:46:38.737256] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:07.903 [2024-11-20 08:46:38.737271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:07.903 [2024-11-20 08:46:38.737280] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:07.903 [2024-11-20 08:46:38.737294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:07.903 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.903 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:07.903 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.903 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.903 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:07.903 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.903 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:07.903 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.904 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.904 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.904 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.904 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.904 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.904 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:13:07.904 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.904 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.904 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.904 "name": "Existed_Raid", 00:13:07.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.904 "strip_size_kb": 64, 00:13:07.904 "state": "configuring", 00:13:07.904 "raid_level": "concat", 00:13:07.904 "superblock": false, 00:13:07.904 "num_base_bdevs": 4, 00:13:07.904 "num_base_bdevs_discovered": 0, 00:13:07.904 "num_base_bdevs_operational": 4, 00:13:07.904 "base_bdevs_list": [ 00:13:07.904 { 00:13:07.904 "name": "BaseBdev1", 00:13:07.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.904 "is_configured": false, 00:13:07.904 "data_offset": 0, 00:13:07.904 "data_size": 0 00:13:07.904 }, 00:13:07.904 { 00:13:07.904 "name": "BaseBdev2", 00:13:07.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.904 "is_configured": false, 00:13:07.904 "data_offset": 0, 00:13:07.904 "data_size": 0 00:13:07.904 }, 00:13:07.904 { 00:13:07.904 "name": "BaseBdev3", 00:13:07.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.904 "is_configured": false, 00:13:07.904 "data_offset": 0, 00:13:07.904 "data_size": 0 00:13:07.904 }, 00:13:07.904 { 00:13:07.904 "name": "BaseBdev4", 00:13:07.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.904 "is_configured": false, 00:13:07.904 "data_offset": 0, 00:13:07.904 "data_size": 0 00:13:07.904 } 00:13:07.904 ] 00:13:07.904 }' 00:13:07.904 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.904 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.472 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:13:08.472 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.472 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.472 [2024-11-20 08:46:39.293368] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:08.472 [2024-11-20 08:46:39.293455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:08.472 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.472 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:08.472 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.472 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.472 [2024-11-20 08:46:39.301236] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:08.472 [2024-11-20 08:46:39.301303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:08.472 [2024-11-20 08:46:39.301339] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:08.472 [2024-11-20 08:46:39.301363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:08.472 [2024-11-20 08:46:39.301377] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:08.472 [2024-11-20 08:46:39.301397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:08.473 [2024-11-20 08:46:39.301409] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:08.473 [2024-11-20 08:46:39.301427] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:08.473 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.473 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:08.473 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.473 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.473 [2024-11-20 08:46:39.355025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:08.473 BaseBdev1 00:13:08.473 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.473 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:08.473 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:08.473 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:08.473 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:08.473 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:08.473 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:08.473 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:08.473 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.473 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.473 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.473 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:08.473 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.473 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.473 [ 00:13:08.473 { 00:13:08.473 "name": "BaseBdev1", 00:13:08.473 "aliases": [ 00:13:08.473 "b0672e74-1f9f-4535-8182-4ede9dedebd6" 00:13:08.473 ], 00:13:08.473 "product_name": "Malloc disk", 00:13:08.473 "block_size": 512, 00:13:08.473 "num_blocks": 65536, 00:13:08.473 "uuid": "b0672e74-1f9f-4535-8182-4ede9dedebd6", 00:13:08.473 "assigned_rate_limits": { 00:13:08.473 "rw_ios_per_sec": 0, 00:13:08.473 "rw_mbytes_per_sec": 0, 00:13:08.473 "r_mbytes_per_sec": 0, 00:13:08.473 "w_mbytes_per_sec": 0 00:13:08.473 }, 00:13:08.473 "claimed": true, 00:13:08.473 "claim_type": "exclusive_write", 00:13:08.473 "zoned": false, 00:13:08.473 "supported_io_types": { 00:13:08.473 "read": true, 00:13:08.473 "write": true, 00:13:08.473 "unmap": true, 00:13:08.473 "flush": true, 00:13:08.473 "reset": true, 00:13:08.473 "nvme_admin": false, 00:13:08.473 "nvme_io": false, 00:13:08.473 "nvme_io_md": false, 00:13:08.473 "write_zeroes": true, 00:13:08.473 "zcopy": true, 00:13:08.736 "get_zone_info": false, 00:13:08.736 "zone_management": false, 00:13:08.736 "zone_append": false, 00:13:08.736 "compare": false, 00:13:08.736 "compare_and_write": false, 00:13:08.736 "abort": true, 00:13:08.736 "seek_hole": false, 00:13:08.736 "seek_data": false, 00:13:08.736 "copy": true, 00:13:08.736 "nvme_iov_md": false 00:13:08.736 }, 00:13:08.736 "memory_domains": [ 00:13:08.736 { 00:13:08.736 "dma_device_id": "system", 00:13:08.736 "dma_device_type": 1 00:13:08.736 }, 00:13:08.736 { 00:13:08.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.736 "dma_device_type": 2 00:13:08.736 } 00:13:08.736 ], 00:13:08.736 "driver_specific": {} 00:13:08.736 } 00:13:08.736 ] 00:13:08.736 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:08.736 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:08.737 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:08.737 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:08.737 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:08.737 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:08.737 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.737 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:08.737 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.737 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.737 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.737 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.737 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.737 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.737 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.737 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.737 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.737 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.737 "name": "Existed_Raid", 
00:13:08.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.737 "strip_size_kb": 64, 00:13:08.737 "state": "configuring", 00:13:08.737 "raid_level": "concat", 00:13:08.737 "superblock": false, 00:13:08.737 "num_base_bdevs": 4, 00:13:08.737 "num_base_bdevs_discovered": 1, 00:13:08.737 "num_base_bdevs_operational": 4, 00:13:08.737 "base_bdevs_list": [ 00:13:08.737 { 00:13:08.737 "name": "BaseBdev1", 00:13:08.737 "uuid": "b0672e74-1f9f-4535-8182-4ede9dedebd6", 00:13:08.737 "is_configured": true, 00:13:08.737 "data_offset": 0, 00:13:08.737 "data_size": 65536 00:13:08.737 }, 00:13:08.737 { 00:13:08.737 "name": "BaseBdev2", 00:13:08.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.737 "is_configured": false, 00:13:08.737 "data_offset": 0, 00:13:08.737 "data_size": 0 00:13:08.737 }, 00:13:08.737 { 00:13:08.737 "name": "BaseBdev3", 00:13:08.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.737 "is_configured": false, 00:13:08.737 "data_offset": 0, 00:13:08.737 "data_size": 0 00:13:08.737 }, 00:13:08.737 { 00:13:08.737 "name": "BaseBdev4", 00:13:08.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.737 "is_configured": false, 00:13:08.737 "data_offset": 0, 00:13:08.737 "data_size": 0 00:13:08.737 } 00:13:08.737 ] 00:13:08.737 }' 00:13:08.737 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.737 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.303 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.304 [2024-11-20 08:46:39.915287] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:09.304 [2024-11-20 08:46:39.915372] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.304 [2024-11-20 08:46:39.923339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:09.304 [2024-11-20 08:46:39.925783] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:09.304 [2024-11-20 08:46:39.925840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:09.304 [2024-11-20 08:46:39.925857] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:09.304 [2024-11-20 08:46:39.925876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:09.304 [2024-11-20 08:46:39.925886] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:09.304 [2024-11-20 08:46:39.925909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.304 "name": "Existed_Raid", 00:13:09.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.304 "strip_size_kb": 64, 00:13:09.304 "state": "configuring", 00:13:09.304 "raid_level": "concat", 00:13:09.304 "superblock": false, 00:13:09.304 "num_base_bdevs": 4, 00:13:09.304 
"num_base_bdevs_discovered": 1, 00:13:09.304 "num_base_bdevs_operational": 4, 00:13:09.304 "base_bdevs_list": [ 00:13:09.304 { 00:13:09.304 "name": "BaseBdev1", 00:13:09.304 "uuid": "b0672e74-1f9f-4535-8182-4ede9dedebd6", 00:13:09.304 "is_configured": true, 00:13:09.304 "data_offset": 0, 00:13:09.304 "data_size": 65536 00:13:09.304 }, 00:13:09.304 { 00:13:09.304 "name": "BaseBdev2", 00:13:09.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.304 "is_configured": false, 00:13:09.304 "data_offset": 0, 00:13:09.304 "data_size": 0 00:13:09.304 }, 00:13:09.304 { 00:13:09.304 "name": "BaseBdev3", 00:13:09.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.304 "is_configured": false, 00:13:09.304 "data_offset": 0, 00:13:09.304 "data_size": 0 00:13:09.304 }, 00:13:09.304 { 00:13:09.304 "name": "BaseBdev4", 00:13:09.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.304 "is_configured": false, 00:13:09.304 "data_offset": 0, 00:13:09.304 "data_size": 0 00:13:09.304 } 00:13:09.304 ] 00:13:09.304 }' 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.304 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.561 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:09.561 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.561 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.819 [2024-11-20 08:46:40.498241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:09.819 BaseBdev2 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:09.819 08:46:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.819 [ 00:13:09.819 { 00:13:09.819 "name": "BaseBdev2", 00:13:09.819 "aliases": [ 00:13:09.819 "2981a9cf-9be6-46c7-96b4-c02ea81c2687" 00:13:09.819 ], 00:13:09.819 "product_name": "Malloc disk", 00:13:09.819 "block_size": 512, 00:13:09.819 "num_blocks": 65536, 00:13:09.819 "uuid": "2981a9cf-9be6-46c7-96b4-c02ea81c2687", 00:13:09.819 "assigned_rate_limits": { 00:13:09.819 "rw_ios_per_sec": 0, 00:13:09.819 "rw_mbytes_per_sec": 0, 00:13:09.819 "r_mbytes_per_sec": 0, 00:13:09.819 "w_mbytes_per_sec": 0 00:13:09.819 }, 00:13:09.819 "claimed": true, 00:13:09.819 "claim_type": "exclusive_write", 00:13:09.819 "zoned": false, 00:13:09.819 "supported_io_types": { 
00:13:09.819 "read": true, 00:13:09.819 "write": true, 00:13:09.819 "unmap": true, 00:13:09.819 "flush": true, 00:13:09.819 "reset": true, 00:13:09.819 "nvme_admin": false, 00:13:09.819 "nvme_io": false, 00:13:09.819 "nvme_io_md": false, 00:13:09.819 "write_zeroes": true, 00:13:09.819 "zcopy": true, 00:13:09.819 "get_zone_info": false, 00:13:09.819 "zone_management": false, 00:13:09.819 "zone_append": false, 00:13:09.819 "compare": false, 00:13:09.819 "compare_and_write": false, 00:13:09.819 "abort": true, 00:13:09.819 "seek_hole": false, 00:13:09.819 "seek_data": false, 00:13:09.819 "copy": true, 00:13:09.819 "nvme_iov_md": false 00:13:09.819 }, 00:13:09.819 "memory_domains": [ 00:13:09.819 { 00:13:09.819 "dma_device_id": "system", 00:13:09.819 "dma_device_type": 1 00:13:09.819 }, 00:13:09.819 { 00:13:09.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.819 "dma_device_type": 2 00:13:09.819 } 00:13:09.819 ], 00:13:09.819 "driver_specific": {} 00:13:09.819 } 00:13:09.819 ] 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.819 "name": "Existed_Raid", 00:13:09.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.819 "strip_size_kb": 64, 00:13:09.819 "state": "configuring", 00:13:09.819 "raid_level": "concat", 00:13:09.819 "superblock": false, 00:13:09.819 "num_base_bdevs": 4, 00:13:09.819 "num_base_bdevs_discovered": 2, 00:13:09.819 "num_base_bdevs_operational": 4, 00:13:09.819 "base_bdevs_list": [ 00:13:09.819 { 00:13:09.819 "name": "BaseBdev1", 00:13:09.819 "uuid": "b0672e74-1f9f-4535-8182-4ede9dedebd6", 00:13:09.819 "is_configured": true, 00:13:09.819 "data_offset": 0, 00:13:09.819 "data_size": 65536 00:13:09.819 }, 00:13:09.819 { 00:13:09.819 "name": "BaseBdev2", 00:13:09.819 "uuid": "2981a9cf-9be6-46c7-96b4-c02ea81c2687", 00:13:09.819 
"is_configured": true, 00:13:09.819 "data_offset": 0, 00:13:09.819 "data_size": 65536 00:13:09.819 }, 00:13:09.819 { 00:13:09.819 "name": "BaseBdev3", 00:13:09.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.819 "is_configured": false, 00:13:09.819 "data_offset": 0, 00:13:09.819 "data_size": 0 00:13:09.819 }, 00:13:09.819 { 00:13:09.819 "name": "BaseBdev4", 00:13:09.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.819 "is_configured": false, 00:13:09.819 "data_offset": 0, 00:13:09.819 "data_size": 0 00:13:09.819 } 00:13:09.819 ] 00:13:09.819 }' 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.819 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.385 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:10.385 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.385 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.385 [2024-11-20 08:46:41.143679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:10.385 BaseBdev3 00:13:10.385 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.385 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:10.385 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:10.385 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:10.385 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:10.385 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:10.385 08:46:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:10.385 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:10.385 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.385 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.385 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.385 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:10.385 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.385 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.385 [ 00:13:10.385 { 00:13:10.385 "name": "BaseBdev3", 00:13:10.385 "aliases": [ 00:13:10.385 "e12b767e-3d78-4523-b5f0-ec0525905246" 00:13:10.385 ], 00:13:10.385 "product_name": "Malloc disk", 00:13:10.385 "block_size": 512, 00:13:10.385 "num_blocks": 65536, 00:13:10.385 "uuid": "e12b767e-3d78-4523-b5f0-ec0525905246", 00:13:10.385 "assigned_rate_limits": { 00:13:10.385 "rw_ios_per_sec": 0, 00:13:10.385 "rw_mbytes_per_sec": 0, 00:13:10.385 "r_mbytes_per_sec": 0, 00:13:10.385 "w_mbytes_per_sec": 0 00:13:10.385 }, 00:13:10.385 "claimed": true, 00:13:10.385 "claim_type": "exclusive_write", 00:13:10.385 "zoned": false, 00:13:10.385 "supported_io_types": { 00:13:10.386 "read": true, 00:13:10.386 "write": true, 00:13:10.386 "unmap": true, 00:13:10.386 "flush": true, 00:13:10.386 "reset": true, 00:13:10.386 "nvme_admin": false, 00:13:10.386 "nvme_io": false, 00:13:10.386 "nvme_io_md": false, 00:13:10.386 "write_zeroes": true, 00:13:10.386 "zcopy": true, 00:13:10.386 "get_zone_info": false, 00:13:10.386 "zone_management": false, 00:13:10.386 "zone_append": false, 00:13:10.386 "compare": false, 00:13:10.386 "compare_and_write": false, 
00:13:10.386 "abort": true, 00:13:10.386 "seek_hole": false, 00:13:10.386 "seek_data": false, 00:13:10.386 "copy": true, 00:13:10.386 "nvme_iov_md": false 00:13:10.386 }, 00:13:10.386 "memory_domains": [ 00:13:10.386 { 00:13:10.386 "dma_device_id": "system", 00:13:10.386 "dma_device_type": 1 00:13:10.386 }, 00:13:10.386 { 00:13:10.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.386 "dma_device_type": 2 00:13:10.386 } 00:13:10.386 ], 00:13:10.386 "driver_specific": {} 00:13:10.386 } 00:13:10.386 ] 00:13:10.386 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.386 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:10.386 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:10.386 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:10.386 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:10.386 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.386 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.386 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:10.386 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.386 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:10.386 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.386 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.386 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:10.386 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.386 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.386 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.386 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.386 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.386 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.386 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.386 "name": "Existed_Raid", 00:13:10.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.386 "strip_size_kb": 64, 00:13:10.386 "state": "configuring", 00:13:10.386 "raid_level": "concat", 00:13:10.386 "superblock": false, 00:13:10.386 "num_base_bdevs": 4, 00:13:10.386 "num_base_bdevs_discovered": 3, 00:13:10.386 "num_base_bdevs_operational": 4, 00:13:10.386 "base_bdevs_list": [ 00:13:10.386 { 00:13:10.386 "name": "BaseBdev1", 00:13:10.386 "uuid": "b0672e74-1f9f-4535-8182-4ede9dedebd6", 00:13:10.386 "is_configured": true, 00:13:10.386 "data_offset": 0, 00:13:10.386 "data_size": 65536 00:13:10.386 }, 00:13:10.386 { 00:13:10.386 "name": "BaseBdev2", 00:13:10.386 "uuid": "2981a9cf-9be6-46c7-96b4-c02ea81c2687", 00:13:10.386 "is_configured": true, 00:13:10.386 "data_offset": 0, 00:13:10.386 "data_size": 65536 00:13:10.386 }, 00:13:10.386 { 00:13:10.386 "name": "BaseBdev3", 00:13:10.386 "uuid": "e12b767e-3d78-4523-b5f0-ec0525905246", 00:13:10.386 "is_configured": true, 00:13:10.386 "data_offset": 0, 00:13:10.386 "data_size": 65536 00:13:10.386 }, 00:13:10.386 { 00:13:10.386 "name": "BaseBdev4", 00:13:10.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.386 "is_configured": false, 
00:13:10.386 "data_offset": 0, 00:13:10.386 "data_size": 0 00:13:10.386 } 00:13:10.386 ] 00:13:10.386 }' 00:13:10.386 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.386 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.970 [2024-11-20 08:46:41.765230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:10.970 [2024-11-20 08:46:41.765588] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:10.970 [2024-11-20 08:46:41.765620] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:10.970 [2024-11-20 08:46:41.766028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:10.970 [2024-11-20 08:46:41.766308] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:10.970 [2024-11-20 08:46:41.766339] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:10.970 [2024-11-20 08:46:41.766780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.970 BaseBdev4 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.970 [ 00:13:10.970 { 00:13:10.970 "name": "BaseBdev4", 00:13:10.970 "aliases": [ 00:13:10.970 "a4252f51-7c32-4986-b651-16e1e26b8008" 00:13:10.970 ], 00:13:10.970 "product_name": "Malloc disk", 00:13:10.970 "block_size": 512, 00:13:10.970 "num_blocks": 65536, 00:13:10.970 "uuid": "a4252f51-7c32-4986-b651-16e1e26b8008", 00:13:10.970 "assigned_rate_limits": { 00:13:10.970 "rw_ios_per_sec": 0, 00:13:10.970 "rw_mbytes_per_sec": 0, 00:13:10.970 "r_mbytes_per_sec": 0, 00:13:10.970 "w_mbytes_per_sec": 0 00:13:10.970 }, 00:13:10.970 "claimed": true, 00:13:10.970 "claim_type": "exclusive_write", 00:13:10.970 "zoned": false, 00:13:10.970 "supported_io_types": { 00:13:10.970 "read": true, 00:13:10.970 "write": true, 00:13:10.970 "unmap": true, 00:13:10.970 "flush": true, 00:13:10.970 "reset": true, 00:13:10.970 
"nvme_admin": false, 00:13:10.970 "nvme_io": false, 00:13:10.970 "nvme_io_md": false, 00:13:10.970 "write_zeroes": true, 00:13:10.970 "zcopy": true, 00:13:10.970 "get_zone_info": false, 00:13:10.970 "zone_management": false, 00:13:10.970 "zone_append": false, 00:13:10.970 "compare": false, 00:13:10.970 "compare_and_write": false, 00:13:10.970 "abort": true, 00:13:10.970 "seek_hole": false, 00:13:10.970 "seek_data": false, 00:13:10.970 "copy": true, 00:13:10.970 "nvme_iov_md": false 00:13:10.970 }, 00:13:10.970 "memory_domains": [ 00:13:10.970 { 00:13:10.970 "dma_device_id": "system", 00:13:10.970 "dma_device_type": 1 00:13:10.970 }, 00:13:10.970 { 00:13:10.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.970 "dma_device_type": 2 00:13:10.970 } 00:13:10.970 ], 00:13:10.970 "driver_specific": {} 00:13:10.970 } 00:13:10.970 ] 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:10.970 
08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.970 "name": "Existed_Raid", 00:13:10.970 "uuid": "2dca6193-e288-441c-9e3c-8c27778f19af", 00:13:10.970 "strip_size_kb": 64, 00:13:10.970 "state": "online", 00:13:10.970 "raid_level": "concat", 00:13:10.970 "superblock": false, 00:13:10.970 "num_base_bdevs": 4, 00:13:10.970 "num_base_bdevs_discovered": 4, 00:13:10.970 "num_base_bdevs_operational": 4, 00:13:10.970 "base_bdevs_list": [ 00:13:10.970 { 00:13:10.970 "name": "BaseBdev1", 00:13:10.970 "uuid": "b0672e74-1f9f-4535-8182-4ede9dedebd6", 00:13:10.970 "is_configured": true, 00:13:10.970 "data_offset": 0, 00:13:10.970 "data_size": 65536 00:13:10.970 }, 00:13:10.970 { 00:13:10.970 "name": "BaseBdev2", 00:13:10.970 "uuid": "2981a9cf-9be6-46c7-96b4-c02ea81c2687", 00:13:10.970 "is_configured": true, 00:13:10.970 "data_offset": 0, 00:13:10.970 "data_size": 65536 00:13:10.970 }, 00:13:10.970 { 00:13:10.970 "name": "BaseBdev3", 
00:13:10.970 "uuid": "e12b767e-3d78-4523-b5f0-ec0525905246", 00:13:10.970 "is_configured": true, 00:13:10.970 "data_offset": 0, 00:13:10.970 "data_size": 65536 00:13:10.970 }, 00:13:10.970 { 00:13:10.970 "name": "BaseBdev4", 00:13:10.970 "uuid": "a4252f51-7c32-4986-b651-16e1e26b8008", 00:13:10.970 "is_configured": true, 00:13:10.970 "data_offset": 0, 00:13:10.970 "data_size": 65536 00:13:10.970 } 00:13:10.970 ] 00:13:10.970 }' 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.970 08:46:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.536 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:11.536 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:11.536 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:11.536 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:11.536 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:11.536 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:11.536 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:11.536 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:11.536 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.536 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.536 [2024-11-20 08:46:42.361888] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:11.536 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.536 
08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:11.536 "name": "Existed_Raid", 00:13:11.536 "aliases": [ 00:13:11.536 "2dca6193-e288-441c-9e3c-8c27778f19af" 00:13:11.536 ], 00:13:11.536 "product_name": "Raid Volume", 00:13:11.536 "block_size": 512, 00:13:11.536 "num_blocks": 262144, 00:13:11.536 "uuid": "2dca6193-e288-441c-9e3c-8c27778f19af", 00:13:11.536 "assigned_rate_limits": { 00:13:11.536 "rw_ios_per_sec": 0, 00:13:11.536 "rw_mbytes_per_sec": 0, 00:13:11.536 "r_mbytes_per_sec": 0, 00:13:11.536 "w_mbytes_per_sec": 0 00:13:11.536 }, 00:13:11.536 "claimed": false, 00:13:11.536 "zoned": false, 00:13:11.536 "supported_io_types": { 00:13:11.536 "read": true, 00:13:11.536 "write": true, 00:13:11.536 "unmap": true, 00:13:11.536 "flush": true, 00:13:11.536 "reset": true, 00:13:11.536 "nvme_admin": false, 00:13:11.536 "nvme_io": false, 00:13:11.536 "nvme_io_md": false, 00:13:11.536 "write_zeroes": true, 00:13:11.536 "zcopy": false, 00:13:11.536 "get_zone_info": false, 00:13:11.536 "zone_management": false, 00:13:11.536 "zone_append": false, 00:13:11.536 "compare": false, 00:13:11.536 "compare_and_write": false, 00:13:11.536 "abort": false, 00:13:11.536 "seek_hole": false, 00:13:11.536 "seek_data": false, 00:13:11.536 "copy": false, 00:13:11.536 "nvme_iov_md": false 00:13:11.536 }, 00:13:11.536 "memory_domains": [ 00:13:11.536 { 00:13:11.536 "dma_device_id": "system", 00:13:11.536 "dma_device_type": 1 00:13:11.536 }, 00:13:11.536 { 00:13:11.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.536 "dma_device_type": 2 00:13:11.536 }, 00:13:11.536 { 00:13:11.536 "dma_device_id": "system", 00:13:11.536 "dma_device_type": 1 00:13:11.536 }, 00:13:11.536 { 00:13:11.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.536 "dma_device_type": 2 00:13:11.536 }, 00:13:11.536 { 00:13:11.536 "dma_device_id": "system", 00:13:11.536 "dma_device_type": 1 00:13:11.536 }, 00:13:11.536 { 00:13:11.536 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:11.536 "dma_device_type": 2 00:13:11.536 }, 00:13:11.536 { 00:13:11.536 "dma_device_id": "system", 00:13:11.536 "dma_device_type": 1 00:13:11.536 }, 00:13:11.536 { 00:13:11.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.536 "dma_device_type": 2 00:13:11.536 } 00:13:11.536 ], 00:13:11.536 "driver_specific": { 00:13:11.536 "raid": { 00:13:11.536 "uuid": "2dca6193-e288-441c-9e3c-8c27778f19af", 00:13:11.536 "strip_size_kb": 64, 00:13:11.536 "state": "online", 00:13:11.536 "raid_level": "concat", 00:13:11.536 "superblock": false, 00:13:11.536 "num_base_bdevs": 4, 00:13:11.536 "num_base_bdevs_discovered": 4, 00:13:11.536 "num_base_bdevs_operational": 4, 00:13:11.536 "base_bdevs_list": [ 00:13:11.536 { 00:13:11.536 "name": "BaseBdev1", 00:13:11.536 "uuid": "b0672e74-1f9f-4535-8182-4ede9dedebd6", 00:13:11.536 "is_configured": true, 00:13:11.536 "data_offset": 0, 00:13:11.536 "data_size": 65536 00:13:11.536 }, 00:13:11.536 { 00:13:11.536 "name": "BaseBdev2", 00:13:11.536 "uuid": "2981a9cf-9be6-46c7-96b4-c02ea81c2687", 00:13:11.536 "is_configured": true, 00:13:11.536 "data_offset": 0, 00:13:11.536 "data_size": 65536 00:13:11.536 }, 00:13:11.536 { 00:13:11.536 "name": "BaseBdev3", 00:13:11.536 "uuid": "e12b767e-3d78-4523-b5f0-ec0525905246", 00:13:11.536 "is_configured": true, 00:13:11.536 "data_offset": 0, 00:13:11.536 "data_size": 65536 00:13:11.536 }, 00:13:11.536 { 00:13:11.536 "name": "BaseBdev4", 00:13:11.536 "uuid": "a4252f51-7c32-4986-b651-16e1e26b8008", 00:13:11.536 "is_configured": true, 00:13:11.536 "data_offset": 0, 00:13:11.536 "data_size": 65536 00:13:11.536 } 00:13:11.536 ] 00:13:11.536 } 00:13:11.536 } 00:13:11.536 }' 00:13:11.536 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:11.794 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:11.794 BaseBdev2 
00:13:11.794 BaseBdev3 00:13:11.794 BaseBdev4' 00:13:11.794 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.794 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:11.794 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.794 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:11.794 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.794 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.794 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.794 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.794 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.794 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.794 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.795 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:11.795 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.795 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.795 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.795 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.795 08:46:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.795 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.795 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.795 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.795 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:11.795 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.795 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.795 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.795 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.795 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.795 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.795 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:11.795 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.795 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.795 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.053 08:46:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.053 [2024-11-20 08:46:42.757677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:12.053 [2024-11-20 08:46:42.757726] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:12.053 [2024-11-20 08:46:42.757791] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.053 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.053 "name": "Existed_Raid", 00:13:12.053 "uuid": "2dca6193-e288-441c-9e3c-8c27778f19af", 00:13:12.053 "strip_size_kb": 64, 00:13:12.053 "state": "offline", 00:13:12.053 "raid_level": "concat", 00:13:12.053 "superblock": false, 00:13:12.053 "num_base_bdevs": 4, 00:13:12.053 "num_base_bdevs_discovered": 3, 00:13:12.053 "num_base_bdevs_operational": 3, 00:13:12.053 "base_bdevs_list": [ 00:13:12.053 { 00:13:12.053 "name": null, 00:13:12.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.053 "is_configured": false, 00:13:12.053 "data_offset": 0, 00:13:12.053 "data_size": 65536 00:13:12.053 }, 00:13:12.053 { 00:13:12.053 "name": "BaseBdev2", 00:13:12.053 "uuid": "2981a9cf-9be6-46c7-96b4-c02ea81c2687", 00:13:12.053 "is_configured": 
true, 00:13:12.053 "data_offset": 0, 00:13:12.053 "data_size": 65536 00:13:12.053 }, 00:13:12.053 { 00:13:12.053 "name": "BaseBdev3", 00:13:12.054 "uuid": "e12b767e-3d78-4523-b5f0-ec0525905246", 00:13:12.054 "is_configured": true, 00:13:12.054 "data_offset": 0, 00:13:12.054 "data_size": 65536 00:13:12.054 }, 00:13:12.054 { 00:13:12.054 "name": "BaseBdev4", 00:13:12.054 "uuid": "a4252f51-7c32-4986-b651-16e1e26b8008", 00:13:12.054 "is_configured": true, 00:13:12.054 "data_offset": 0, 00:13:12.054 "data_size": 65536 00:13:12.054 } 00:13:12.054 ] 00:13:12.054 }' 00:13:12.054 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.054 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.620 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:12.620 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:12.620 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.620 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.620 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.620 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:12.620 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.620 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:12.620 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:12.620 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:12.620 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:12.620 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.620 [2024-11-20 08:46:43.448749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.879 [2024-11-20 08:46:43.597608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:12.879 08:46:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.879 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.879 [2024-11-20 08:46:43.748793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:12.879 [2024-11-20 08:46:43.748856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.138 BaseBdev2 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.138 [ 00:13:13.138 { 00:13:13.138 "name": "BaseBdev2", 00:13:13.138 "aliases": [ 00:13:13.138 "9f9906fe-30cf-40eb-9e20-a976603eff6e" 00:13:13.138 ], 00:13:13.138 "product_name": "Malloc disk", 00:13:13.138 "block_size": 512, 00:13:13.138 "num_blocks": 65536, 00:13:13.138 "uuid": "9f9906fe-30cf-40eb-9e20-a976603eff6e", 00:13:13.138 "assigned_rate_limits": { 00:13:13.138 "rw_ios_per_sec": 0, 00:13:13.138 "rw_mbytes_per_sec": 0, 00:13:13.138 "r_mbytes_per_sec": 0, 00:13:13.138 "w_mbytes_per_sec": 0 00:13:13.138 }, 00:13:13.138 "claimed": false, 00:13:13.138 "zoned": false, 00:13:13.138 "supported_io_types": { 00:13:13.138 "read": true, 00:13:13.138 "write": true, 00:13:13.138 "unmap": true, 00:13:13.138 "flush": true, 00:13:13.138 "reset": true, 00:13:13.138 "nvme_admin": false, 00:13:13.138 "nvme_io": false, 00:13:13.138 "nvme_io_md": false, 00:13:13.138 "write_zeroes": true, 00:13:13.138 "zcopy": true, 00:13:13.138 "get_zone_info": false, 00:13:13.138 "zone_management": false, 00:13:13.138 "zone_append": false, 00:13:13.138 "compare": false, 00:13:13.138 "compare_and_write": false, 00:13:13.138 "abort": true, 00:13:13.138 "seek_hole": false, 00:13:13.138 
"seek_data": false, 00:13:13.138 "copy": true, 00:13:13.138 "nvme_iov_md": false 00:13:13.138 }, 00:13:13.138 "memory_domains": [ 00:13:13.138 { 00:13:13.138 "dma_device_id": "system", 00:13:13.138 "dma_device_type": 1 00:13:13.138 }, 00:13:13.138 { 00:13:13.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.138 "dma_device_type": 2 00:13:13.138 } 00:13:13.138 ], 00:13:13.138 "driver_specific": {} 00:13:13.138 } 00:13:13.138 ] 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.138 08:46:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.138 BaseBdev3 00:13:13.138 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.138 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:13.138 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:13.138 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:13.139 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:13.139 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:13.139 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:13:13.139 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:13.139 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.139 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.139 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.139 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:13.139 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.139 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.139 [ 00:13:13.139 { 00:13:13.139 "name": "BaseBdev3", 00:13:13.139 "aliases": [ 00:13:13.139 "021c0132-0d8a-49ad-bafd-e6d055e3dfcb" 00:13:13.139 ], 00:13:13.139 "product_name": "Malloc disk", 00:13:13.139 "block_size": 512, 00:13:13.139 "num_blocks": 65536, 00:13:13.139 "uuid": "021c0132-0d8a-49ad-bafd-e6d055e3dfcb", 00:13:13.139 "assigned_rate_limits": { 00:13:13.139 "rw_ios_per_sec": 0, 00:13:13.139 "rw_mbytes_per_sec": 0, 00:13:13.139 "r_mbytes_per_sec": 0, 00:13:13.139 "w_mbytes_per_sec": 0 00:13:13.139 }, 00:13:13.139 "claimed": false, 00:13:13.139 "zoned": false, 00:13:13.139 "supported_io_types": { 00:13:13.139 "read": true, 00:13:13.139 "write": true, 00:13:13.139 "unmap": true, 00:13:13.139 "flush": true, 00:13:13.139 "reset": true, 00:13:13.139 "nvme_admin": false, 00:13:13.139 "nvme_io": false, 00:13:13.139 "nvme_io_md": false, 00:13:13.139 "write_zeroes": true, 00:13:13.139 "zcopy": true, 00:13:13.139 "get_zone_info": false, 00:13:13.139 "zone_management": false, 00:13:13.139 "zone_append": false, 00:13:13.139 "compare": false, 00:13:13.139 "compare_and_write": false, 00:13:13.139 "abort": true, 00:13:13.139 "seek_hole": false, 00:13:13.139 "seek_data": false, 
00:13:13.139 "copy": true, 00:13:13.139 "nvme_iov_md": false 00:13:13.139 }, 00:13:13.139 "memory_domains": [ 00:13:13.139 { 00:13:13.139 "dma_device_id": "system", 00:13:13.139 "dma_device_type": 1 00:13:13.139 }, 00:13:13.139 { 00:13:13.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.139 "dma_device_type": 2 00:13:13.139 } 00:13:13.139 ], 00:13:13.139 "driver_specific": {} 00:13:13.139 } 00:13:13.139 ] 00:13:13.139 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.139 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:13.139 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:13.139 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:13.139 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:13.139 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.139 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.398 BaseBdev4 00:13:13.398 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.398 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:13.398 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:13.398 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:13.398 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:13.398 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:13.398 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:13.398 
08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:13.398 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.398 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.398 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.398 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:13.398 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.398 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.398 [ 00:13:13.398 { 00:13:13.398 "name": "BaseBdev4", 00:13:13.398 "aliases": [ 00:13:13.398 "0a2e8dfb-f2e3-443b-be5a-4f2009bcc057" 00:13:13.398 ], 00:13:13.398 "product_name": "Malloc disk", 00:13:13.398 "block_size": 512, 00:13:13.398 "num_blocks": 65536, 00:13:13.398 "uuid": "0a2e8dfb-f2e3-443b-be5a-4f2009bcc057", 00:13:13.399 "assigned_rate_limits": { 00:13:13.399 "rw_ios_per_sec": 0, 00:13:13.399 "rw_mbytes_per_sec": 0, 00:13:13.399 "r_mbytes_per_sec": 0, 00:13:13.399 "w_mbytes_per_sec": 0 00:13:13.399 }, 00:13:13.399 "claimed": false, 00:13:13.399 "zoned": false, 00:13:13.399 "supported_io_types": { 00:13:13.399 "read": true, 00:13:13.399 "write": true, 00:13:13.399 "unmap": true, 00:13:13.399 "flush": true, 00:13:13.399 "reset": true, 00:13:13.399 "nvme_admin": false, 00:13:13.399 "nvme_io": false, 00:13:13.399 "nvme_io_md": false, 00:13:13.399 "write_zeroes": true, 00:13:13.399 "zcopy": true, 00:13:13.399 "get_zone_info": false, 00:13:13.399 "zone_management": false, 00:13:13.399 "zone_append": false, 00:13:13.399 "compare": false, 00:13:13.399 "compare_and_write": false, 00:13:13.399 "abort": true, 00:13:13.399 "seek_hole": false, 00:13:13.399 "seek_data": false, 00:13:13.399 
"copy": true, 00:13:13.399 "nvme_iov_md": false 00:13:13.399 }, 00:13:13.399 "memory_domains": [ 00:13:13.399 { 00:13:13.399 "dma_device_id": "system", 00:13:13.399 "dma_device_type": 1 00:13:13.399 }, 00:13:13.399 { 00:13:13.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.399 "dma_device_type": 2 00:13:13.399 } 00:13:13.399 ], 00:13:13.399 "driver_specific": {} 00:13:13.399 } 00:13:13.399 ] 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.399 [2024-11-20 08:46:44.120890] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:13.399 [2024-11-20 08:46:44.120942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:13.399 [2024-11-20 08:46:44.120989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:13.399 [2024-11-20 08:46:44.123319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:13.399 [2024-11-20 08:46:44.123387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.399 08:46:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.399 "name": "Existed_Raid", 00:13:13.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.399 "strip_size_kb": 64, 00:13:13.399 "state": "configuring", 00:13:13.399 
"raid_level": "concat", 00:13:13.399 "superblock": false, 00:13:13.399 "num_base_bdevs": 4, 00:13:13.399 "num_base_bdevs_discovered": 3, 00:13:13.399 "num_base_bdevs_operational": 4, 00:13:13.399 "base_bdevs_list": [ 00:13:13.399 { 00:13:13.399 "name": "BaseBdev1", 00:13:13.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.399 "is_configured": false, 00:13:13.399 "data_offset": 0, 00:13:13.399 "data_size": 0 00:13:13.399 }, 00:13:13.399 { 00:13:13.399 "name": "BaseBdev2", 00:13:13.399 "uuid": "9f9906fe-30cf-40eb-9e20-a976603eff6e", 00:13:13.399 "is_configured": true, 00:13:13.399 "data_offset": 0, 00:13:13.399 "data_size": 65536 00:13:13.399 }, 00:13:13.399 { 00:13:13.399 "name": "BaseBdev3", 00:13:13.399 "uuid": "021c0132-0d8a-49ad-bafd-e6d055e3dfcb", 00:13:13.399 "is_configured": true, 00:13:13.399 "data_offset": 0, 00:13:13.399 "data_size": 65536 00:13:13.399 }, 00:13:13.399 { 00:13:13.399 "name": "BaseBdev4", 00:13:13.399 "uuid": "0a2e8dfb-f2e3-443b-be5a-4f2009bcc057", 00:13:13.399 "is_configured": true, 00:13:13.399 "data_offset": 0, 00:13:13.399 "data_size": 65536 00:13:13.399 } 00:13:13.399 ] 00:13:13.399 }' 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.399 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.966 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:13.966 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.966 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.966 [2024-11-20 08:46:44.649048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:13.966 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.966 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:13.966 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.966 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.966 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:13.966 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.966 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:13.966 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.966 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.966 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.966 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.966 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.966 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.966 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.966 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.966 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.966 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.966 "name": "Existed_Raid", 00:13:13.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.966 "strip_size_kb": 64, 00:13:13.966 "state": "configuring", 00:13:13.966 "raid_level": "concat", 00:13:13.966 "superblock": false, 
00:13:13.966 "num_base_bdevs": 4, 00:13:13.966 "num_base_bdevs_discovered": 2, 00:13:13.966 "num_base_bdevs_operational": 4, 00:13:13.966 "base_bdevs_list": [ 00:13:13.966 { 00:13:13.966 "name": "BaseBdev1", 00:13:13.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.966 "is_configured": false, 00:13:13.966 "data_offset": 0, 00:13:13.966 "data_size": 0 00:13:13.966 }, 00:13:13.966 { 00:13:13.966 "name": null, 00:13:13.966 "uuid": "9f9906fe-30cf-40eb-9e20-a976603eff6e", 00:13:13.966 "is_configured": false, 00:13:13.966 "data_offset": 0, 00:13:13.966 "data_size": 65536 00:13:13.966 }, 00:13:13.966 { 00:13:13.966 "name": "BaseBdev3", 00:13:13.966 "uuid": "021c0132-0d8a-49ad-bafd-e6d055e3dfcb", 00:13:13.966 "is_configured": true, 00:13:13.966 "data_offset": 0, 00:13:13.966 "data_size": 65536 00:13:13.966 }, 00:13:13.966 { 00:13:13.966 "name": "BaseBdev4", 00:13:13.966 "uuid": "0a2e8dfb-f2e3-443b-be5a-4f2009bcc057", 00:13:13.966 "is_configured": true, 00:13:13.966 "data_offset": 0, 00:13:13.966 "data_size": 65536 00:13:13.966 } 00:13:13.966 ] 00:13:13.966 }' 00:13:13.966 08:46:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.966 08:46:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.259 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.259 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:14.259 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.259 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:14.520 08:46:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.520 [2024-11-20 08:46:45.255034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:14.520 BaseBdev1 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:14.520 [ 00:13:14.520 { 00:13:14.520 "name": "BaseBdev1", 00:13:14.520 "aliases": [ 00:13:14.520 "5e9bde70-99a2-4964-a7ce-f4bfb56aca39" 00:13:14.520 ], 00:13:14.520 "product_name": "Malloc disk", 00:13:14.520 "block_size": 512, 00:13:14.520 "num_blocks": 65536, 00:13:14.520 "uuid": "5e9bde70-99a2-4964-a7ce-f4bfb56aca39", 00:13:14.520 "assigned_rate_limits": { 00:13:14.520 "rw_ios_per_sec": 0, 00:13:14.520 "rw_mbytes_per_sec": 0, 00:13:14.520 "r_mbytes_per_sec": 0, 00:13:14.520 "w_mbytes_per_sec": 0 00:13:14.520 }, 00:13:14.520 "claimed": true, 00:13:14.520 "claim_type": "exclusive_write", 00:13:14.520 "zoned": false, 00:13:14.520 "supported_io_types": { 00:13:14.520 "read": true, 00:13:14.520 "write": true, 00:13:14.520 "unmap": true, 00:13:14.520 "flush": true, 00:13:14.520 "reset": true, 00:13:14.520 "nvme_admin": false, 00:13:14.520 "nvme_io": false, 00:13:14.520 "nvme_io_md": false, 00:13:14.520 "write_zeroes": true, 00:13:14.520 "zcopy": true, 00:13:14.520 "get_zone_info": false, 00:13:14.520 "zone_management": false, 00:13:14.520 "zone_append": false, 00:13:14.520 "compare": false, 00:13:14.520 "compare_and_write": false, 00:13:14.520 "abort": true, 00:13:14.520 "seek_hole": false, 00:13:14.520 "seek_data": false, 00:13:14.520 "copy": true, 00:13:14.520 "nvme_iov_md": false 00:13:14.520 }, 00:13:14.520 "memory_domains": [ 00:13:14.520 { 00:13:14.520 "dma_device_id": "system", 00:13:14.520 "dma_device_type": 1 00:13:14.520 }, 00:13:14.520 { 00:13:14.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.520 "dma_device_type": 2 00:13:14.520 } 00:13:14.520 ], 00:13:14.520 "driver_specific": {} 00:13:14.520 } 00:13:14.520 ] 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.520 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.520 "name": "Existed_Raid", 00:13:14.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.520 "strip_size_kb": 64, 00:13:14.520 "state": "configuring", 00:13:14.520 "raid_level": "concat", 00:13:14.521 "superblock": false, 
00:13:14.521 "num_base_bdevs": 4, 00:13:14.521 "num_base_bdevs_discovered": 3, 00:13:14.521 "num_base_bdevs_operational": 4, 00:13:14.521 "base_bdevs_list": [ 00:13:14.521 { 00:13:14.521 "name": "BaseBdev1", 00:13:14.521 "uuid": "5e9bde70-99a2-4964-a7ce-f4bfb56aca39", 00:13:14.521 "is_configured": true, 00:13:14.521 "data_offset": 0, 00:13:14.521 "data_size": 65536 00:13:14.521 }, 00:13:14.521 { 00:13:14.521 "name": null, 00:13:14.521 "uuid": "9f9906fe-30cf-40eb-9e20-a976603eff6e", 00:13:14.521 "is_configured": false, 00:13:14.521 "data_offset": 0, 00:13:14.521 "data_size": 65536 00:13:14.521 }, 00:13:14.521 { 00:13:14.521 "name": "BaseBdev3", 00:13:14.521 "uuid": "021c0132-0d8a-49ad-bafd-e6d055e3dfcb", 00:13:14.521 "is_configured": true, 00:13:14.521 "data_offset": 0, 00:13:14.521 "data_size": 65536 00:13:14.521 }, 00:13:14.521 { 00:13:14.521 "name": "BaseBdev4", 00:13:14.521 "uuid": "0a2e8dfb-f2e3-443b-be5a-4f2009bcc057", 00:13:14.521 "is_configured": true, 00:13:14.521 "data_offset": 0, 00:13:14.521 "data_size": 65536 00:13:14.521 } 00:13:14.521 ] 00:13:14.521 }' 00:13:14.521 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.521 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.088 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.088 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:15.088 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.088 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.088 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.088 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:15.088 08:46:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:15.089 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.089 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.089 [2024-11-20 08:46:45.851350] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:15.089 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.089 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:15.089 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.089 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.089 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:15.089 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.089 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:15.089 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.089 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.089 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.089 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.089 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.089 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.089 08:46:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:15.089 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.089 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.089 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.089 "name": "Existed_Raid", 00:13:15.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.089 "strip_size_kb": 64, 00:13:15.089 "state": "configuring", 00:13:15.089 "raid_level": "concat", 00:13:15.089 "superblock": false, 00:13:15.089 "num_base_bdevs": 4, 00:13:15.089 "num_base_bdevs_discovered": 2, 00:13:15.089 "num_base_bdevs_operational": 4, 00:13:15.089 "base_bdevs_list": [ 00:13:15.089 { 00:13:15.089 "name": "BaseBdev1", 00:13:15.089 "uuid": "5e9bde70-99a2-4964-a7ce-f4bfb56aca39", 00:13:15.089 "is_configured": true, 00:13:15.089 "data_offset": 0, 00:13:15.089 "data_size": 65536 00:13:15.089 }, 00:13:15.089 { 00:13:15.089 "name": null, 00:13:15.089 "uuid": "9f9906fe-30cf-40eb-9e20-a976603eff6e", 00:13:15.089 "is_configured": false, 00:13:15.089 "data_offset": 0, 00:13:15.089 "data_size": 65536 00:13:15.089 }, 00:13:15.089 { 00:13:15.089 "name": null, 00:13:15.089 "uuid": "021c0132-0d8a-49ad-bafd-e6d055e3dfcb", 00:13:15.089 "is_configured": false, 00:13:15.089 "data_offset": 0, 00:13:15.089 "data_size": 65536 00:13:15.089 }, 00:13:15.089 { 00:13:15.089 "name": "BaseBdev4", 00:13:15.089 "uuid": "0a2e8dfb-f2e3-443b-be5a-4f2009bcc057", 00:13:15.089 "is_configured": true, 00:13:15.089 "data_offset": 0, 00:13:15.089 "data_size": 65536 00:13:15.089 } 00:13:15.089 ] 00:13:15.089 }' 00:13:15.089 08:46:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.089 08:46:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.658 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:15.658 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.658 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.658 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:15.658 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.658 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:15.658 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:15.658 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.658 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.658 [2024-11-20 08:46:46.447538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:15.658 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.658 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:15.658 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.658 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.658 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:15.658 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.658 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:15.658 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:13:15.658 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.658 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.658 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.658 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.658 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.659 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.659 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.659 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.659 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.659 "name": "Existed_Raid", 00:13:15.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.659 "strip_size_kb": 64, 00:13:15.659 "state": "configuring", 00:13:15.659 "raid_level": "concat", 00:13:15.659 "superblock": false, 00:13:15.659 "num_base_bdevs": 4, 00:13:15.659 "num_base_bdevs_discovered": 3, 00:13:15.659 "num_base_bdevs_operational": 4, 00:13:15.659 "base_bdevs_list": [ 00:13:15.659 { 00:13:15.659 "name": "BaseBdev1", 00:13:15.659 "uuid": "5e9bde70-99a2-4964-a7ce-f4bfb56aca39", 00:13:15.659 "is_configured": true, 00:13:15.659 "data_offset": 0, 00:13:15.659 "data_size": 65536 00:13:15.659 }, 00:13:15.659 { 00:13:15.659 "name": null, 00:13:15.659 "uuid": "9f9906fe-30cf-40eb-9e20-a976603eff6e", 00:13:15.659 "is_configured": false, 00:13:15.659 "data_offset": 0, 00:13:15.659 "data_size": 65536 00:13:15.659 }, 00:13:15.659 { 00:13:15.659 "name": "BaseBdev3", 00:13:15.659 "uuid": "021c0132-0d8a-49ad-bafd-e6d055e3dfcb", 00:13:15.659 
"is_configured": true, 00:13:15.659 "data_offset": 0, 00:13:15.659 "data_size": 65536 00:13:15.659 }, 00:13:15.659 { 00:13:15.659 "name": "BaseBdev4", 00:13:15.659 "uuid": "0a2e8dfb-f2e3-443b-be5a-4f2009bcc057", 00:13:15.659 "is_configured": true, 00:13:15.659 "data_offset": 0, 00:13:15.659 "data_size": 65536 00:13:15.659 } 00:13:15.659 ] 00:13:15.659 }' 00:13:15.659 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.659 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.225 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.225 08:46:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:16.225 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.226 08:46:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.226 08:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.226 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:16.226 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:16.226 08:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.226 08:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.226 [2024-11-20 08:46:47.039779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:16.484 08:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.484 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:16.484 08:46:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.484 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.484 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:16.484 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.484 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:16.484 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.484 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.484 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.484 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.484 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.484 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.484 08:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.484 08:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.484 08:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.484 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.484 "name": "Existed_Raid", 00:13:16.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.484 "strip_size_kb": 64, 00:13:16.484 "state": "configuring", 00:13:16.484 "raid_level": "concat", 00:13:16.484 "superblock": false, 00:13:16.484 "num_base_bdevs": 4, 00:13:16.484 "num_base_bdevs_discovered": 2, 00:13:16.484 "num_base_bdevs_operational": 4, 
00:13:16.484 "base_bdevs_list": [ 00:13:16.484 { 00:13:16.484 "name": null, 00:13:16.484 "uuid": "5e9bde70-99a2-4964-a7ce-f4bfb56aca39", 00:13:16.484 "is_configured": false, 00:13:16.484 "data_offset": 0, 00:13:16.484 "data_size": 65536 00:13:16.484 }, 00:13:16.484 { 00:13:16.484 "name": null, 00:13:16.484 "uuid": "9f9906fe-30cf-40eb-9e20-a976603eff6e", 00:13:16.484 "is_configured": false, 00:13:16.484 "data_offset": 0, 00:13:16.484 "data_size": 65536 00:13:16.484 }, 00:13:16.484 { 00:13:16.484 "name": "BaseBdev3", 00:13:16.484 "uuid": "021c0132-0d8a-49ad-bafd-e6d055e3dfcb", 00:13:16.484 "is_configured": true, 00:13:16.484 "data_offset": 0, 00:13:16.484 "data_size": 65536 00:13:16.484 }, 00:13:16.484 { 00:13:16.484 "name": "BaseBdev4", 00:13:16.484 "uuid": "0a2e8dfb-f2e3-443b-be5a-4f2009bcc057", 00:13:16.484 "is_configured": true, 00:13:16.484 "data_offset": 0, 00:13:16.484 "data_size": 65536 00:13:16.484 } 00:13:16.484 ] 00:13:16.484 }' 00:13:16.484 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.484 08:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.054 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.054 08:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.054 08:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.054 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:17.054 08:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.054 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:17.054 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:17.054 08:46:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.054 08:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.054 [2024-11-20 08:46:47.740061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.054 08:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.054 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:17.054 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.054 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.054 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:17.054 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.054 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.054 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.054 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.054 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.054 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.054 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.054 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.054 08:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.054 08:46:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.054 08:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.054 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.054 "name": "Existed_Raid", 00:13:17.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.054 "strip_size_kb": 64, 00:13:17.054 "state": "configuring", 00:13:17.054 "raid_level": "concat", 00:13:17.054 "superblock": false, 00:13:17.054 "num_base_bdevs": 4, 00:13:17.054 "num_base_bdevs_discovered": 3, 00:13:17.054 "num_base_bdevs_operational": 4, 00:13:17.054 "base_bdevs_list": [ 00:13:17.054 { 00:13:17.054 "name": null, 00:13:17.054 "uuid": "5e9bde70-99a2-4964-a7ce-f4bfb56aca39", 00:13:17.054 "is_configured": false, 00:13:17.054 "data_offset": 0, 00:13:17.054 "data_size": 65536 00:13:17.054 }, 00:13:17.054 { 00:13:17.054 "name": "BaseBdev2", 00:13:17.054 "uuid": "9f9906fe-30cf-40eb-9e20-a976603eff6e", 00:13:17.054 "is_configured": true, 00:13:17.054 "data_offset": 0, 00:13:17.054 "data_size": 65536 00:13:17.054 }, 00:13:17.054 { 00:13:17.054 "name": "BaseBdev3", 00:13:17.054 "uuid": "021c0132-0d8a-49ad-bafd-e6d055e3dfcb", 00:13:17.054 "is_configured": true, 00:13:17.054 "data_offset": 0, 00:13:17.054 "data_size": 65536 00:13:17.054 }, 00:13:17.054 { 00:13:17.054 "name": "BaseBdev4", 00:13:17.054 "uuid": "0a2e8dfb-f2e3-443b-be5a-4f2009bcc057", 00:13:17.054 "is_configured": true, 00:13:17.054 "data_offset": 0, 00:13:17.054 "data_size": 65536 00:13:17.054 } 00:13:17.055 ] 00:13:17.055 }' 00:13:17.055 08:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.055 08:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.314 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.314 08:46:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.314 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:17.314 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.314 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5e9bde70-99a2-4964-a7ce-f4bfb56aca39 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.574 [2024-11-20 08:46:48.351983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:17.574 [2024-11-20 08:46:48.352057] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:17.574 [2024-11-20 08:46:48.352068] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:17.574 [2024-11-20 08:46:48.352426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:17.574 [2024-11-20 08:46:48.352634] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:17.574 [2024-11-20 08:46:48.352655] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:17.574 [2024-11-20 08:46:48.352947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.574 NewBaseBdev 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.574 [ 00:13:17.574 { 
00:13:17.574 "name": "NewBaseBdev", 00:13:17.574 "aliases": [ 00:13:17.574 "5e9bde70-99a2-4964-a7ce-f4bfb56aca39" 00:13:17.574 ], 00:13:17.574 "product_name": "Malloc disk", 00:13:17.574 "block_size": 512, 00:13:17.574 "num_blocks": 65536, 00:13:17.574 "uuid": "5e9bde70-99a2-4964-a7ce-f4bfb56aca39", 00:13:17.574 "assigned_rate_limits": { 00:13:17.574 "rw_ios_per_sec": 0, 00:13:17.574 "rw_mbytes_per_sec": 0, 00:13:17.574 "r_mbytes_per_sec": 0, 00:13:17.574 "w_mbytes_per_sec": 0 00:13:17.574 }, 00:13:17.574 "claimed": true, 00:13:17.574 "claim_type": "exclusive_write", 00:13:17.574 "zoned": false, 00:13:17.574 "supported_io_types": { 00:13:17.574 "read": true, 00:13:17.574 "write": true, 00:13:17.574 "unmap": true, 00:13:17.574 "flush": true, 00:13:17.574 "reset": true, 00:13:17.574 "nvme_admin": false, 00:13:17.574 "nvme_io": false, 00:13:17.574 "nvme_io_md": false, 00:13:17.574 "write_zeroes": true, 00:13:17.574 "zcopy": true, 00:13:17.574 "get_zone_info": false, 00:13:17.574 "zone_management": false, 00:13:17.574 "zone_append": false, 00:13:17.574 "compare": false, 00:13:17.574 "compare_and_write": false, 00:13:17.574 "abort": true, 00:13:17.574 "seek_hole": false, 00:13:17.574 "seek_data": false, 00:13:17.574 "copy": true, 00:13:17.574 "nvme_iov_md": false 00:13:17.574 }, 00:13:17.574 "memory_domains": [ 00:13:17.574 { 00:13:17.574 "dma_device_id": "system", 00:13:17.574 "dma_device_type": 1 00:13:17.574 }, 00:13:17.574 { 00:13:17.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.574 "dma_device_type": 2 00:13:17.574 } 00:13:17.574 ], 00:13:17.574 "driver_specific": {} 00:13:17.574 } 00:13:17.574 ] 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:17.574 
08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.574 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.575 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.575 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.575 "name": "Existed_Raid", 00:13:17.575 "uuid": "dbd395cd-8e56-4bb8-89cb-4ac741721664", 00:13:17.575 "strip_size_kb": 64, 00:13:17.575 "state": "online", 00:13:17.575 "raid_level": "concat", 00:13:17.575 "superblock": false, 00:13:17.575 "num_base_bdevs": 4, 00:13:17.575 "num_base_bdevs_discovered": 4, 00:13:17.575 
"num_base_bdevs_operational": 4, 00:13:17.575 "base_bdevs_list": [ 00:13:17.575 { 00:13:17.575 "name": "NewBaseBdev", 00:13:17.575 "uuid": "5e9bde70-99a2-4964-a7ce-f4bfb56aca39", 00:13:17.575 "is_configured": true, 00:13:17.575 "data_offset": 0, 00:13:17.575 "data_size": 65536 00:13:17.575 }, 00:13:17.575 { 00:13:17.575 "name": "BaseBdev2", 00:13:17.575 "uuid": "9f9906fe-30cf-40eb-9e20-a976603eff6e", 00:13:17.575 "is_configured": true, 00:13:17.575 "data_offset": 0, 00:13:17.575 "data_size": 65536 00:13:17.575 }, 00:13:17.575 { 00:13:17.575 "name": "BaseBdev3", 00:13:17.575 "uuid": "021c0132-0d8a-49ad-bafd-e6d055e3dfcb", 00:13:17.575 "is_configured": true, 00:13:17.575 "data_offset": 0, 00:13:17.575 "data_size": 65536 00:13:17.575 }, 00:13:17.575 { 00:13:17.575 "name": "BaseBdev4", 00:13:17.575 "uuid": "0a2e8dfb-f2e3-443b-be5a-4f2009bcc057", 00:13:17.575 "is_configured": true, 00:13:17.575 "data_offset": 0, 00:13:17.575 "data_size": 65536 00:13:17.575 } 00:13:17.575 ] 00:13:17.575 }' 00:13:17.575 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.575 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.143 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:18.143 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:18.143 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:18.143 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:18.143 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:18.143 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:18.143 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:13:18.143 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.143 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:18.143 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.143 [2024-11-20 08:46:48.844681] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:18.143 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.143 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:18.143 "name": "Existed_Raid", 00:13:18.143 "aliases": [ 00:13:18.143 "dbd395cd-8e56-4bb8-89cb-4ac741721664" 00:13:18.143 ], 00:13:18.143 "product_name": "Raid Volume", 00:13:18.143 "block_size": 512, 00:13:18.143 "num_blocks": 262144, 00:13:18.143 "uuid": "dbd395cd-8e56-4bb8-89cb-4ac741721664", 00:13:18.143 "assigned_rate_limits": { 00:13:18.143 "rw_ios_per_sec": 0, 00:13:18.143 "rw_mbytes_per_sec": 0, 00:13:18.143 "r_mbytes_per_sec": 0, 00:13:18.143 "w_mbytes_per_sec": 0 00:13:18.143 }, 00:13:18.143 "claimed": false, 00:13:18.143 "zoned": false, 00:13:18.143 "supported_io_types": { 00:13:18.143 "read": true, 00:13:18.143 "write": true, 00:13:18.143 "unmap": true, 00:13:18.143 "flush": true, 00:13:18.143 "reset": true, 00:13:18.143 "nvme_admin": false, 00:13:18.143 "nvme_io": false, 00:13:18.143 "nvme_io_md": false, 00:13:18.143 "write_zeroes": true, 00:13:18.143 "zcopy": false, 00:13:18.143 "get_zone_info": false, 00:13:18.143 "zone_management": false, 00:13:18.143 "zone_append": false, 00:13:18.143 "compare": false, 00:13:18.143 "compare_and_write": false, 00:13:18.143 "abort": false, 00:13:18.143 "seek_hole": false, 00:13:18.143 "seek_data": false, 00:13:18.143 "copy": false, 00:13:18.143 "nvme_iov_md": false 00:13:18.143 }, 00:13:18.143 "memory_domains": [ 00:13:18.143 { 00:13:18.143 "dma_device_id": "system", 
00:13:18.143 "dma_device_type": 1 00:13:18.143 }, 00:13:18.143 { 00:13:18.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.143 "dma_device_type": 2 00:13:18.143 }, 00:13:18.143 { 00:13:18.143 "dma_device_id": "system", 00:13:18.143 "dma_device_type": 1 00:13:18.143 }, 00:13:18.143 { 00:13:18.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.143 "dma_device_type": 2 00:13:18.143 }, 00:13:18.143 { 00:13:18.143 "dma_device_id": "system", 00:13:18.143 "dma_device_type": 1 00:13:18.143 }, 00:13:18.143 { 00:13:18.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.143 "dma_device_type": 2 00:13:18.143 }, 00:13:18.143 { 00:13:18.143 "dma_device_id": "system", 00:13:18.143 "dma_device_type": 1 00:13:18.143 }, 00:13:18.143 { 00:13:18.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.143 "dma_device_type": 2 00:13:18.143 } 00:13:18.143 ], 00:13:18.143 "driver_specific": { 00:13:18.143 "raid": { 00:13:18.143 "uuid": "dbd395cd-8e56-4bb8-89cb-4ac741721664", 00:13:18.143 "strip_size_kb": 64, 00:13:18.143 "state": "online", 00:13:18.143 "raid_level": "concat", 00:13:18.143 "superblock": false, 00:13:18.143 "num_base_bdevs": 4, 00:13:18.143 "num_base_bdevs_discovered": 4, 00:13:18.143 "num_base_bdevs_operational": 4, 00:13:18.143 "base_bdevs_list": [ 00:13:18.143 { 00:13:18.143 "name": "NewBaseBdev", 00:13:18.143 "uuid": "5e9bde70-99a2-4964-a7ce-f4bfb56aca39", 00:13:18.143 "is_configured": true, 00:13:18.143 "data_offset": 0, 00:13:18.143 "data_size": 65536 00:13:18.143 }, 00:13:18.143 { 00:13:18.143 "name": "BaseBdev2", 00:13:18.143 "uuid": "9f9906fe-30cf-40eb-9e20-a976603eff6e", 00:13:18.143 "is_configured": true, 00:13:18.143 "data_offset": 0, 00:13:18.143 "data_size": 65536 00:13:18.143 }, 00:13:18.143 { 00:13:18.143 "name": "BaseBdev3", 00:13:18.143 "uuid": "021c0132-0d8a-49ad-bafd-e6d055e3dfcb", 00:13:18.143 "is_configured": true, 00:13:18.143 "data_offset": 0, 00:13:18.143 "data_size": 65536 00:13:18.143 }, 00:13:18.143 { 00:13:18.143 "name": "BaseBdev4", 
00:13:18.143 "uuid": "0a2e8dfb-f2e3-443b-be5a-4f2009bcc057", 00:13:18.143 "is_configured": true, 00:13:18.143 "data_offset": 0, 00:13:18.143 "data_size": 65536 00:13:18.143 } 00:13:18.143 ] 00:13:18.143 } 00:13:18.143 } 00:13:18.143 }' 00:13:18.143 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:18.143 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:18.143 BaseBdev2 00:13:18.143 BaseBdev3 00:13:18.143 BaseBdev4' 00:13:18.144 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.144 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:18.144 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.144 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:18.144 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.144 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.144 08:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.144 08:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.144 08:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:18.144 08:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.144 08:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.144 08:46:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:18.144 08:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.144 08:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.144 08:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.144 08:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:18.402 08:46:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.402 [2024-11-20 08:46:49.176400] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:18.402 [2024-11-20 08:46:49.176437] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:18.402 [2024-11-20 08:46:49.176549] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:18.402 [2024-11-20 08:46:49.176632] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:18.402 [2024-11-20 08:46:49.176648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71378 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 71378 ']' 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71378 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71378 00:13:18.402 killing process with pid 71378 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71378' 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71378 00:13:18.402 [2024-11-20 08:46:49.214425] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:18.402 08:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71378 00:13:18.661 [2024-11-20 08:46:49.552883] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:20.034 00:13:20.034 real 0m13.055s 00:13:20.034 user 0m21.579s 00:13:20.034 sys 0m1.864s 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.034 ************************************ 00:13:20.034 END TEST raid_state_function_test 00:13:20.034 ************************************ 00:13:20.034 08:46:50 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:13:20.034 08:46:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:20.034 08:46:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.034 08:46:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:20.034 ************************************ 00:13:20.034 START TEST raid_state_function_test_sb 00:13:20.034 ************************************ 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:20.034 08:46:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72061 00:13:20.034 Process raid pid: 72061 00:13:20.034 08:46:50 
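The xtrace above shows `bdev_raid.sh` looping `(( i <= num_base_bdevs ))` and echoing `BaseBdevN` until it has built the `base_bdevs=('BaseBdev1' ... 'BaseBdev4')` array. A minimal standalone sketch of that loop (variable names taken from the trace; the capture-via-echo plumbing is omitted):

```shell
# Sketch of the base-bdev name generation traced in bdev_raid.sh@209-211.
num_base_bdevs=4
base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
  base_bdevs+=("BaseBdev$i")
done
echo "${base_bdevs[*]}"   # -> BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4
```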
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72061' 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72061 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72061 ']' 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.034 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.034 [2024-11-20 08:46:50.756327] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:13:20.034 [2024-11-20 08:46:50.756500] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.034 [2024-11-20 08:46:50.943758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.292 [2024-11-20 08:46:51.072817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.551 [2024-11-20 08:46:51.284035] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.551 [2024-11-20 08:46:51.284106] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.118 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:21.118 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:21.118 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:21.118 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.118 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.118 [2024-11-20 08:46:51.795123] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:21.118 [2024-11-20 08:46:51.795260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:21.118 [2024-11-20 08:46:51.795278] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:21.118 [2024-11-20 08:46:51.795302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:21.118 [2024-11-20 08:46:51.795312] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:13:21.118 [2024-11-20 08:46:51.795327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:21.118 [2024-11-20 08:46:51.795337] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:21.118 [2024-11-20 08:46:51.795351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:21.118 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.118 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:21.118 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.118 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.118 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:21.118 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.118 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:21.118 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.118 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.118 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.118 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.118 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.118 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.118 
08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.118 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.118 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.118 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.118 "name": "Existed_Raid", 00:13:21.118 "uuid": "4592f652-53ac-4e87-a8f9-4ec8afc3e46a", 00:13:21.118 "strip_size_kb": 64, 00:13:21.118 "state": "configuring", 00:13:21.118 "raid_level": "concat", 00:13:21.118 "superblock": true, 00:13:21.118 "num_base_bdevs": 4, 00:13:21.118 "num_base_bdevs_discovered": 0, 00:13:21.118 "num_base_bdevs_operational": 4, 00:13:21.118 "base_bdevs_list": [ 00:13:21.118 { 00:13:21.118 "name": "BaseBdev1", 00:13:21.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.118 "is_configured": false, 00:13:21.118 "data_offset": 0, 00:13:21.118 "data_size": 0 00:13:21.118 }, 00:13:21.118 { 00:13:21.118 "name": "BaseBdev2", 00:13:21.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.118 "is_configured": false, 00:13:21.118 "data_offset": 0, 00:13:21.118 "data_size": 0 00:13:21.118 }, 00:13:21.118 { 00:13:21.118 "name": "BaseBdev3", 00:13:21.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.118 "is_configured": false, 00:13:21.118 "data_offset": 0, 00:13:21.118 "data_size": 0 00:13:21.118 }, 00:13:21.118 { 00:13:21.118 "name": "BaseBdev4", 00:13:21.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.118 "is_configured": false, 00:13:21.118 "data_offset": 0, 00:13:21.118 "data_size": 0 00:13:21.118 } 00:13:21.118 ] 00:13:21.118 }' 00:13:21.118 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.118 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.685 08:46:52 
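The `verify_raid_bdev_state` helper traced above fetches the raid bdev JSON with `rpc_cmd bdev_raid_get_bdevs all | jq -r 'select(.name == "Existed_Raid")'` and then compares fields like `state` against the expected values. A hypothetical jq-free sketch of that field check, using only bash parameter expansion on a trimmed-down copy of the JSON from the log:

```shell
# Hypothetical sketch of the state check inside verify_raid_bdev_state;
# the real helper parses the full rpc_cmd output with jq.
raid_bdev_info='{ "name": "Existed_Raid", "state": "configuring", "raid_level": "concat" }'
expected_state=configuring
# Strip everything through '"state": "', then cut at the next quote.
tmp=${raid_bdev_info#*\"state\": \"}
state=${tmp%%\"*}
if [ "$state" = "$expected_state" ]; then
  echo "Existed_Raid state: $state (as expected)"
fi
```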
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:21.685 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.685 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.685 [2024-11-20 08:46:52.315196] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:21.685 [2024-11-20 08:46:52.315262] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:21.685 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.685 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:21.685 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.685 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.685 [2024-11-20 08:46:52.323252] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:21.685 [2024-11-20 08:46:52.323305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:21.685 [2024-11-20 08:46:52.323320] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:21.685 [2024-11-20 08:46:52.323336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:21.685 [2024-11-20 08:46:52.323346] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:21.685 [2024-11-20 08:46:52.323367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:21.685 [2024-11-20 08:46:52.323376] bdev.c:8278:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:13:21.685 [2024-11-20 08:46:52.323398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:21.685 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.685 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.686 [2024-11-20 08:46:52.369580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:21.686 BaseBdev1 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.686 [ 00:13:21.686 { 00:13:21.686 "name": "BaseBdev1", 00:13:21.686 "aliases": [ 00:13:21.686 "50a511f0-69a8-4f6e-a023-afa6aedcff54" 00:13:21.686 ], 00:13:21.686 "product_name": "Malloc disk", 00:13:21.686 "block_size": 512, 00:13:21.686 "num_blocks": 65536, 00:13:21.686 "uuid": "50a511f0-69a8-4f6e-a023-afa6aedcff54", 00:13:21.686 "assigned_rate_limits": { 00:13:21.686 "rw_ios_per_sec": 0, 00:13:21.686 "rw_mbytes_per_sec": 0, 00:13:21.686 "r_mbytes_per_sec": 0, 00:13:21.686 "w_mbytes_per_sec": 0 00:13:21.686 }, 00:13:21.686 "claimed": true, 00:13:21.686 "claim_type": "exclusive_write", 00:13:21.686 "zoned": false, 00:13:21.686 "supported_io_types": { 00:13:21.686 "read": true, 00:13:21.686 "write": true, 00:13:21.686 "unmap": true, 00:13:21.686 "flush": true, 00:13:21.686 "reset": true, 00:13:21.686 "nvme_admin": false, 00:13:21.686 "nvme_io": false, 00:13:21.686 "nvme_io_md": false, 00:13:21.686 "write_zeroes": true, 00:13:21.686 "zcopy": true, 00:13:21.686 "get_zone_info": false, 00:13:21.686 "zone_management": false, 00:13:21.686 "zone_append": false, 00:13:21.686 "compare": false, 00:13:21.686 "compare_and_write": false, 00:13:21.686 "abort": true, 00:13:21.686 "seek_hole": false, 00:13:21.686 "seek_data": false, 00:13:21.686 "copy": true, 00:13:21.686 "nvme_iov_md": false 00:13:21.686 }, 00:13:21.686 "memory_domains": [ 00:13:21.686 { 00:13:21.686 "dma_device_id": "system", 00:13:21.686 "dma_device_type": 1 00:13:21.686 }, 00:13:21.686 { 00:13:21.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.686 "dma_device_type": 2 00:13:21.686 } 
00:13:21.686 ], 00:13:21.686 "driver_specific": {} 00:13:21.686 } 00:13:21.686 ] 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.686 08:46:52 
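After each `bdev_malloc_create`, the trace runs `waitforbdev BaseBdevN`, which polls `rpc_cmd bdev_get_bdevs -b <name> -t 2000` until the bdev appears. A hypothetical self-contained sketch of that retry pattern, with `check_bdev` standing in for the RPC call (which this sketch cannot make):

```shell
# Sketch of the waitforbdev polling loop from autotest_common.sh.
# check_bdev is a stub for 'rpc_cmd bdev_get_bdevs -b <name>': here only
# BaseBdev1 "exists", so waiting on any other name exhausts the retries.
check_bdev() { [ "$1" = "BaseBdev1" ]; }
waitforbdev_sketch() {
  local name=$1 i
  for ((i = 0; i < 5; i++)); do
    check_bdev "$name" && return 0
    sleep 0.1
  done
  return 1
}
waitforbdev_sketch BaseBdev1 && echo "BaseBdev1 is up"
```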
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.686 "name": "Existed_Raid", 00:13:21.686 "uuid": "1437f00d-6034-4ab2-9c7c-fc59ed3abe6d", 00:13:21.686 "strip_size_kb": 64, 00:13:21.686 "state": "configuring", 00:13:21.686 "raid_level": "concat", 00:13:21.686 "superblock": true, 00:13:21.686 "num_base_bdevs": 4, 00:13:21.686 "num_base_bdevs_discovered": 1, 00:13:21.686 "num_base_bdevs_operational": 4, 00:13:21.686 "base_bdevs_list": [ 00:13:21.686 { 00:13:21.686 "name": "BaseBdev1", 00:13:21.686 "uuid": "50a511f0-69a8-4f6e-a023-afa6aedcff54", 00:13:21.686 "is_configured": true, 00:13:21.686 "data_offset": 2048, 00:13:21.686 "data_size": 63488 00:13:21.686 }, 00:13:21.686 { 00:13:21.686 "name": "BaseBdev2", 00:13:21.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.686 "is_configured": false, 00:13:21.686 "data_offset": 0, 00:13:21.686 "data_size": 0 00:13:21.686 }, 00:13:21.686 { 00:13:21.686 "name": "BaseBdev3", 00:13:21.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.686 "is_configured": false, 00:13:21.686 "data_offset": 0, 00:13:21.686 "data_size": 0 00:13:21.686 }, 00:13:21.686 { 00:13:21.686 "name": "BaseBdev4", 00:13:21.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.686 "is_configured": false, 00:13:21.686 "data_offset": 0, 00:13:21.686 "data_size": 0 00:13:21.686 } 00:13:21.686 ] 00:13:21.686 }' 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.686 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.253 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:22.253 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.254 08:46:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.254 [2024-11-20 08:46:52.901823] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:22.254 [2024-11-20 08:46:52.901901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.254 [2024-11-20 08:46:52.909887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.254 [2024-11-20 08:46:52.912311] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:22.254 [2024-11-20 08:46:52.912526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:22.254 [2024-11-20 08:46:52.912554] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:22.254 [2024-11-20 08:46:52.912575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:22.254 [2024-11-20 08:46:52.912595] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:22.254 [2024-11-20 08:46:52.912609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:22.254 "name": "Existed_Raid", 00:13:22.254 "uuid": "ad0581e2-32e7-4254-8373-4413e4fd3f0d", 00:13:22.254 "strip_size_kb": 64, 00:13:22.254 "state": "configuring", 00:13:22.254 "raid_level": "concat", 00:13:22.254 "superblock": true, 00:13:22.254 "num_base_bdevs": 4, 00:13:22.254 "num_base_bdevs_discovered": 1, 00:13:22.254 "num_base_bdevs_operational": 4, 00:13:22.254 "base_bdevs_list": [ 00:13:22.254 { 00:13:22.254 "name": "BaseBdev1", 00:13:22.254 "uuid": "50a511f0-69a8-4f6e-a023-afa6aedcff54", 00:13:22.254 "is_configured": true, 00:13:22.254 "data_offset": 2048, 00:13:22.254 "data_size": 63488 00:13:22.254 }, 00:13:22.254 { 00:13:22.254 "name": "BaseBdev2", 00:13:22.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.254 "is_configured": false, 00:13:22.254 "data_offset": 0, 00:13:22.254 "data_size": 0 00:13:22.254 }, 00:13:22.254 { 00:13:22.254 "name": "BaseBdev3", 00:13:22.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.254 "is_configured": false, 00:13:22.254 "data_offset": 0, 00:13:22.254 "data_size": 0 00:13:22.254 }, 00:13:22.254 { 00:13:22.254 "name": "BaseBdev4", 00:13:22.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.254 "is_configured": false, 00:13:22.254 "data_offset": 0, 00:13:22.254 "data_size": 0 00:13:22.254 } 00:13:22.254 ] 00:13:22.254 }' 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.254 08:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.821 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:22.821 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.821 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.821 [2024-11-20 08:46:53.512387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:13:22.821 BaseBdev2 00:13:22.821 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.821 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:22.821 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:22.821 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:22.821 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:22.821 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:22.821 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:22.821 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:22.821 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.821 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.821 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.821 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:22.822 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.822 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.822 [ 00:13:22.822 { 00:13:22.822 "name": "BaseBdev2", 00:13:22.822 "aliases": [ 00:13:22.822 "a3b8ea46-39d6-4a57-b933-c880e65ff8eb" 00:13:22.822 ], 00:13:22.822 "product_name": "Malloc disk", 00:13:22.822 "block_size": 512, 00:13:22.822 "num_blocks": 65536, 00:13:22.822 "uuid": "a3b8ea46-39d6-4a57-b933-c880e65ff8eb", 
00:13:22.822 "assigned_rate_limits": { 00:13:22.822 "rw_ios_per_sec": 0, 00:13:22.822 "rw_mbytes_per_sec": 0, 00:13:22.822 "r_mbytes_per_sec": 0, 00:13:22.822 "w_mbytes_per_sec": 0 00:13:22.822 }, 00:13:22.822 "claimed": true, 00:13:22.822 "claim_type": "exclusive_write", 00:13:22.822 "zoned": false, 00:13:22.822 "supported_io_types": { 00:13:22.822 "read": true, 00:13:22.822 "write": true, 00:13:22.822 "unmap": true, 00:13:22.822 "flush": true, 00:13:22.822 "reset": true, 00:13:22.822 "nvme_admin": false, 00:13:22.822 "nvme_io": false, 00:13:22.822 "nvme_io_md": false, 00:13:22.822 "write_zeroes": true, 00:13:22.822 "zcopy": true, 00:13:22.822 "get_zone_info": false, 00:13:22.822 "zone_management": false, 00:13:22.822 "zone_append": false, 00:13:22.822 "compare": false, 00:13:22.822 "compare_and_write": false, 00:13:22.822 "abort": true, 00:13:22.822 "seek_hole": false, 00:13:22.822 "seek_data": false, 00:13:22.822 "copy": true, 00:13:22.822 "nvme_iov_md": false 00:13:22.822 }, 00:13:22.822 "memory_domains": [ 00:13:22.822 { 00:13:22.822 "dma_device_id": "system", 00:13:22.822 "dma_device_type": 1 00:13:22.822 }, 00:13:22.822 { 00:13:22.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.822 "dma_device_type": 2 00:13:22.822 } 00:13:22.822 ], 00:13:22.822 "driver_specific": {} 00:13:22.822 } 00:13:22.822 ] 00:13:22.822 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.822 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:22.822 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:22.822 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:22.822 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:22.822 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:13:22.822 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.822 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:22.822 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.822 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.822 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.822 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.822 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.822 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.822 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.822 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.822 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.822 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.822 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.822 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.822 "name": "Existed_Raid", 00:13:22.822 "uuid": "ad0581e2-32e7-4254-8373-4413e4fd3f0d", 00:13:22.822 "strip_size_kb": 64, 00:13:22.822 "state": "configuring", 00:13:22.822 "raid_level": "concat", 00:13:22.822 "superblock": true, 00:13:22.822 "num_base_bdevs": 4, 00:13:22.822 "num_base_bdevs_discovered": 2, 00:13:22.822 
"num_base_bdevs_operational": 4, 00:13:22.822 "base_bdevs_list": [ 00:13:22.822 { 00:13:22.822 "name": "BaseBdev1", 00:13:22.822 "uuid": "50a511f0-69a8-4f6e-a023-afa6aedcff54", 00:13:22.822 "is_configured": true, 00:13:22.822 "data_offset": 2048, 00:13:22.822 "data_size": 63488 00:13:22.822 }, 00:13:22.822 { 00:13:22.822 "name": "BaseBdev2", 00:13:22.822 "uuid": "a3b8ea46-39d6-4a57-b933-c880e65ff8eb", 00:13:22.822 "is_configured": true, 00:13:22.822 "data_offset": 2048, 00:13:22.822 "data_size": 63488 00:13:22.822 }, 00:13:22.822 { 00:13:22.822 "name": "BaseBdev3", 00:13:22.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.822 "is_configured": false, 00:13:22.822 "data_offset": 0, 00:13:22.822 "data_size": 0 00:13:22.822 }, 00:13:22.822 { 00:13:22.822 "name": "BaseBdev4", 00:13:22.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.822 "is_configured": false, 00:13:22.822 "data_offset": 0, 00:13:22.822 "data_size": 0 00:13:22.822 } 00:13:22.822 ] 00:13:22.822 }' 00:13:22.822 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.822 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.390 [2024-11-20 08:46:54.158543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:23.390 BaseBdev3 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.390 [ 00:13:23.390 { 00:13:23.390 "name": "BaseBdev3", 00:13:23.390 "aliases": [ 00:13:23.390 "fc90e7f1-2acd-49d8-ba5c-4a9502598386" 00:13:23.390 ], 00:13:23.390 "product_name": "Malloc disk", 00:13:23.390 "block_size": 512, 00:13:23.390 "num_blocks": 65536, 00:13:23.390 "uuid": "fc90e7f1-2acd-49d8-ba5c-4a9502598386", 00:13:23.390 "assigned_rate_limits": { 00:13:23.390 "rw_ios_per_sec": 0, 00:13:23.390 "rw_mbytes_per_sec": 0, 00:13:23.390 "r_mbytes_per_sec": 0, 00:13:23.390 "w_mbytes_per_sec": 0 00:13:23.390 }, 00:13:23.390 "claimed": true, 00:13:23.390 "claim_type": "exclusive_write", 00:13:23.390 "zoned": false, 00:13:23.390 "supported_io_types": { 
00:13:23.390 "read": true, 00:13:23.390 "write": true, 00:13:23.390 "unmap": true, 00:13:23.390 "flush": true, 00:13:23.390 "reset": true, 00:13:23.390 "nvme_admin": false, 00:13:23.390 "nvme_io": false, 00:13:23.390 "nvme_io_md": false, 00:13:23.390 "write_zeroes": true, 00:13:23.390 "zcopy": true, 00:13:23.390 "get_zone_info": false, 00:13:23.390 "zone_management": false, 00:13:23.390 "zone_append": false, 00:13:23.390 "compare": false, 00:13:23.390 "compare_and_write": false, 00:13:23.390 "abort": true, 00:13:23.390 "seek_hole": false, 00:13:23.390 "seek_data": false, 00:13:23.390 "copy": true, 00:13:23.390 "nvme_iov_md": false 00:13:23.390 }, 00:13:23.390 "memory_domains": [ 00:13:23.390 { 00:13:23.390 "dma_device_id": "system", 00:13:23.390 "dma_device_type": 1 00:13:23.390 }, 00:13:23.390 { 00:13:23.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.390 "dma_device_type": 2 00:13:23.390 } 00:13:23.390 ], 00:13:23.390 "driver_specific": {} 00:13:23.390 } 00:13:23.390 ] 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.390 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.390 "name": "Existed_Raid", 00:13:23.390 "uuid": "ad0581e2-32e7-4254-8373-4413e4fd3f0d", 00:13:23.390 "strip_size_kb": 64, 00:13:23.390 "state": "configuring", 00:13:23.390 "raid_level": "concat", 00:13:23.390 "superblock": true, 00:13:23.390 "num_base_bdevs": 4, 00:13:23.390 "num_base_bdevs_discovered": 3, 00:13:23.390 "num_base_bdevs_operational": 4, 00:13:23.390 "base_bdevs_list": [ 00:13:23.390 { 00:13:23.390 "name": "BaseBdev1", 00:13:23.390 "uuid": "50a511f0-69a8-4f6e-a023-afa6aedcff54", 00:13:23.390 "is_configured": true, 00:13:23.390 "data_offset": 2048, 00:13:23.391 "data_size": 63488 00:13:23.391 }, 00:13:23.391 { 00:13:23.391 "name": "BaseBdev2", 00:13:23.391 
"uuid": "a3b8ea46-39d6-4a57-b933-c880e65ff8eb", 00:13:23.391 "is_configured": true, 00:13:23.391 "data_offset": 2048, 00:13:23.391 "data_size": 63488 00:13:23.391 }, 00:13:23.391 { 00:13:23.391 "name": "BaseBdev3", 00:13:23.391 "uuid": "fc90e7f1-2acd-49d8-ba5c-4a9502598386", 00:13:23.391 "is_configured": true, 00:13:23.391 "data_offset": 2048, 00:13:23.391 "data_size": 63488 00:13:23.391 }, 00:13:23.391 { 00:13:23.391 "name": "BaseBdev4", 00:13:23.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.391 "is_configured": false, 00:13:23.391 "data_offset": 0, 00:13:23.391 "data_size": 0 00:13:23.391 } 00:13:23.391 ] 00:13:23.391 }' 00:13:23.391 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.391 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.958 [2024-11-20 08:46:54.743999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:23.958 [2024-11-20 08:46:54.744374] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:23.958 [2024-11-20 08:46:54.744396] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:23.958 BaseBdev4 00:13:23.958 [2024-11-20 08:46:54.744729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:23.958 [2024-11-20 08:46:54.744963] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:23.958 [2024-11-20 08:46:54.744990] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:13:23.958 [2024-11-20 08:46:54.745211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.958 [ 00:13:23.958 { 00:13:23.958 "name": "BaseBdev4", 00:13:23.958 "aliases": [ 00:13:23.958 "2b28b4e6-0f88-46dc-980a-ab4f16537f45" 00:13:23.958 ], 00:13:23.958 "product_name": "Malloc disk", 00:13:23.958 "block_size": 512, 00:13:23.958 
"num_blocks": 65536, 00:13:23.958 "uuid": "2b28b4e6-0f88-46dc-980a-ab4f16537f45", 00:13:23.958 "assigned_rate_limits": { 00:13:23.958 "rw_ios_per_sec": 0, 00:13:23.958 "rw_mbytes_per_sec": 0, 00:13:23.958 "r_mbytes_per_sec": 0, 00:13:23.958 "w_mbytes_per_sec": 0 00:13:23.958 }, 00:13:23.958 "claimed": true, 00:13:23.958 "claim_type": "exclusive_write", 00:13:23.958 "zoned": false, 00:13:23.958 "supported_io_types": { 00:13:23.958 "read": true, 00:13:23.958 "write": true, 00:13:23.958 "unmap": true, 00:13:23.958 "flush": true, 00:13:23.958 "reset": true, 00:13:23.958 "nvme_admin": false, 00:13:23.958 "nvme_io": false, 00:13:23.958 "nvme_io_md": false, 00:13:23.958 "write_zeroes": true, 00:13:23.958 "zcopy": true, 00:13:23.958 "get_zone_info": false, 00:13:23.958 "zone_management": false, 00:13:23.958 "zone_append": false, 00:13:23.958 "compare": false, 00:13:23.958 "compare_and_write": false, 00:13:23.958 "abort": true, 00:13:23.958 "seek_hole": false, 00:13:23.958 "seek_data": false, 00:13:23.958 "copy": true, 00:13:23.958 "nvme_iov_md": false 00:13:23.958 }, 00:13:23.958 "memory_domains": [ 00:13:23.958 { 00:13:23.958 "dma_device_id": "system", 00:13:23.958 "dma_device_type": 1 00:13:23.958 }, 00:13:23.958 { 00:13:23.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.958 "dma_device_type": 2 00:13:23.958 } 00:13:23.958 ], 00:13:23.958 "driver_specific": {} 00:13:23.958 } 00:13:23.958 ] 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.958 "name": "Existed_Raid", 00:13:23.958 "uuid": "ad0581e2-32e7-4254-8373-4413e4fd3f0d", 00:13:23.958 "strip_size_kb": 64, 00:13:23.958 "state": "online", 00:13:23.958 "raid_level": "concat", 00:13:23.958 "superblock": true, 00:13:23.958 "num_base_bdevs": 4, 
00:13:23.958 "num_base_bdevs_discovered": 4, 00:13:23.958 "num_base_bdevs_operational": 4, 00:13:23.958 "base_bdevs_list": [ 00:13:23.958 { 00:13:23.958 "name": "BaseBdev1", 00:13:23.958 "uuid": "50a511f0-69a8-4f6e-a023-afa6aedcff54", 00:13:23.958 "is_configured": true, 00:13:23.958 "data_offset": 2048, 00:13:23.958 "data_size": 63488 00:13:23.958 }, 00:13:23.958 { 00:13:23.958 "name": "BaseBdev2", 00:13:23.958 "uuid": "a3b8ea46-39d6-4a57-b933-c880e65ff8eb", 00:13:23.958 "is_configured": true, 00:13:23.958 "data_offset": 2048, 00:13:23.958 "data_size": 63488 00:13:23.958 }, 00:13:23.958 { 00:13:23.958 "name": "BaseBdev3", 00:13:23.958 "uuid": "fc90e7f1-2acd-49d8-ba5c-4a9502598386", 00:13:23.958 "is_configured": true, 00:13:23.958 "data_offset": 2048, 00:13:23.958 "data_size": 63488 00:13:23.958 }, 00:13:23.958 { 00:13:23.958 "name": "BaseBdev4", 00:13:23.958 "uuid": "2b28b4e6-0f88-46dc-980a-ab4f16537f45", 00:13:23.958 "is_configured": true, 00:13:23.958 "data_offset": 2048, 00:13:23.958 "data_size": 63488 00:13:23.958 } 00:13:23.958 ] 00:13:23.958 }' 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.958 08:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.525 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:24.525 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:24.525 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:24.525 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:24.525 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:24.525 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:24.525 
08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:24.525 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.525 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.526 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:24.526 [2024-11-20 08:46:55.376751] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:24.526 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.526 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:24.526 "name": "Existed_Raid", 00:13:24.526 "aliases": [ 00:13:24.526 "ad0581e2-32e7-4254-8373-4413e4fd3f0d" 00:13:24.526 ], 00:13:24.526 "product_name": "Raid Volume", 00:13:24.526 "block_size": 512, 00:13:24.526 "num_blocks": 253952, 00:13:24.526 "uuid": "ad0581e2-32e7-4254-8373-4413e4fd3f0d", 00:13:24.526 "assigned_rate_limits": { 00:13:24.526 "rw_ios_per_sec": 0, 00:13:24.526 "rw_mbytes_per_sec": 0, 00:13:24.526 "r_mbytes_per_sec": 0, 00:13:24.526 "w_mbytes_per_sec": 0 00:13:24.526 }, 00:13:24.526 "claimed": false, 00:13:24.526 "zoned": false, 00:13:24.526 "supported_io_types": { 00:13:24.526 "read": true, 00:13:24.526 "write": true, 00:13:24.526 "unmap": true, 00:13:24.526 "flush": true, 00:13:24.526 "reset": true, 00:13:24.526 "nvme_admin": false, 00:13:24.526 "nvme_io": false, 00:13:24.526 "nvme_io_md": false, 00:13:24.526 "write_zeroes": true, 00:13:24.526 "zcopy": false, 00:13:24.526 "get_zone_info": false, 00:13:24.526 "zone_management": false, 00:13:24.526 "zone_append": false, 00:13:24.526 "compare": false, 00:13:24.526 "compare_and_write": false, 00:13:24.526 "abort": false, 00:13:24.526 "seek_hole": false, 00:13:24.526 "seek_data": false, 00:13:24.526 "copy": false, 00:13:24.526 
"nvme_iov_md": false 00:13:24.526 }, 00:13:24.526 "memory_domains": [ 00:13:24.526 { 00:13:24.526 "dma_device_id": "system", 00:13:24.526 "dma_device_type": 1 00:13:24.526 }, 00:13:24.526 { 00:13:24.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.526 "dma_device_type": 2 00:13:24.526 }, 00:13:24.526 { 00:13:24.526 "dma_device_id": "system", 00:13:24.526 "dma_device_type": 1 00:13:24.526 }, 00:13:24.526 { 00:13:24.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.526 "dma_device_type": 2 00:13:24.526 }, 00:13:24.526 { 00:13:24.526 "dma_device_id": "system", 00:13:24.526 "dma_device_type": 1 00:13:24.526 }, 00:13:24.526 { 00:13:24.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.526 "dma_device_type": 2 00:13:24.526 }, 00:13:24.526 { 00:13:24.526 "dma_device_id": "system", 00:13:24.526 "dma_device_type": 1 00:13:24.526 }, 00:13:24.526 { 00:13:24.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.526 "dma_device_type": 2 00:13:24.526 } 00:13:24.526 ], 00:13:24.526 "driver_specific": { 00:13:24.526 "raid": { 00:13:24.526 "uuid": "ad0581e2-32e7-4254-8373-4413e4fd3f0d", 00:13:24.526 "strip_size_kb": 64, 00:13:24.526 "state": "online", 00:13:24.526 "raid_level": "concat", 00:13:24.526 "superblock": true, 00:13:24.526 "num_base_bdevs": 4, 00:13:24.526 "num_base_bdevs_discovered": 4, 00:13:24.526 "num_base_bdevs_operational": 4, 00:13:24.526 "base_bdevs_list": [ 00:13:24.526 { 00:13:24.526 "name": "BaseBdev1", 00:13:24.526 "uuid": "50a511f0-69a8-4f6e-a023-afa6aedcff54", 00:13:24.526 "is_configured": true, 00:13:24.526 "data_offset": 2048, 00:13:24.526 "data_size": 63488 00:13:24.526 }, 00:13:24.526 { 00:13:24.526 "name": "BaseBdev2", 00:13:24.526 "uuid": "a3b8ea46-39d6-4a57-b933-c880e65ff8eb", 00:13:24.526 "is_configured": true, 00:13:24.526 "data_offset": 2048, 00:13:24.526 "data_size": 63488 00:13:24.526 }, 00:13:24.526 { 00:13:24.526 "name": "BaseBdev3", 00:13:24.526 "uuid": "fc90e7f1-2acd-49d8-ba5c-4a9502598386", 00:13:24.526 "is_configured": true, 
00:13:24.526 "data_offset": 2048, 00:13:24.526 "data_size": 63488 00:13:24.526 }, 00:13:24.526 { 00:13:24.526 "name": "BaseBdev4", 00:13:24.526 "uuid": "2b28b4e6-0f88-46dc-980a-ab4f16537f45", 00:13:24.526 "is_configured": true, 00:13:24.526 "data_offset": 2048, 00:13:24.526 "data_size": 63488 00:13:24.526 } 00:13:24.526 ] 00:13:24.526 } 00:13:24.526 } 00:13:24.526 }' 00:13:24.526 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:24.786 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:24.786 BaseBdev2 00:13:24.786 BaseBdev3 00:13:24.786 BaseBdev4' 00:13:24.786 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.786 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:24.786 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.786 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:24.786 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.786 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.786 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.786 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.786 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.786 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.786 08:46:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.786 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:24.786 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.786 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.786 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.786 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.786 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.786 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.786 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.786 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:24.787 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.787 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.787 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.787 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.787 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.787 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.787 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:24.787 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:24.787 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.787 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.787 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.048 [2024-11-20 08:46:55.752447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:25.048 [2024-11-20 08:46:55.752640] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:25.048 [2024-11-20 08:46:55.752726] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.048 "name": "Existed_Raid", 00:13:25.048 "uuid": "ad0581e2-32e7-4254-8373-4413e4fd3f0d", 00:13:25.048 "strip_size_kb": 64, 00:13:25.048 "state": "offline", 00:13:25.048 "raid_level": "concat", 00:13:25.048 "superblock": true, 00:13:25.048 "num_base_bdevs": 4, 00:13:25.048 "num_base_bdevs_discovered": 3, 00:13:25.048 "num_base_bdevs_operational": 3, 00:13:25.048 "base_bdevs_list": [ 00:13:25.048 { 00:13:25.048 "name": null, 00:13:25.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.048 "is_configured": false, 00:13:25.048 "data_offset": 0, 00:13:25.048 "data_size": 63488 00:13:25.048 }, 00:13:25.048 { 00:13:25.048 "name": "BaseBdev2", 00:13:25.048 "uuid": "a3b8ea46-39d6-4a57-b933-c880e65ff8eb", 00:13:25.048 "is_configured": true, 00:13:25.048 "data_offset": 2048, 00:13:25.048 "data_size": 63488 00:13:25.048 }, 00:13:25.048 { 00:13:25.048 "name": "BaseBdev3", 00:13:25.048 "uuid": "fc90e7f1-2acd-49d8-ba5c-4a9502598386", 00:13:25.048 "is_configured": true, 00:13:25.048 "data_offset": 2048, 00:13:25.048 "data_size": 63488 00:13:25.048 }, 00:13:25.048 { 00:13:25.048 "name": "BaseBdev4", 00:13:25.048 "uuid": "2b28b4e6-0f88-46dc-980a-ab4f16537f45", 00:13:25.048 "is_configured": true, 00:13:25.048 "data_offset": 2048, 00:13:25.048 "data_size": 63488 00:13:25.048 } 00:13:25.048 ] 00:13:25.048 }' 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.048 08:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.617 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:25.617 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.617 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:25.617 08:46:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.617 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.617 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.617 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.617 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:25.617 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:25.617 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:25.617 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.617 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.617 [2024-11-20 08:46:56.441670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:25.876 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.876 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:25.876 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.876 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.876 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.876 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:25.876 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.876 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:25.877 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:25.877 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:25.877 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:25.877 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.877 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.877 [2024-11-20 08:46:56.591105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:25.877 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.877 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:25.877 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.877 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.877 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.877 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.877 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:25.877 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.877 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:25.877 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:25.877 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:25.877 08:46:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.877 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.877 [2024-11-20 08:46:56.746009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:25.877 [2024-11-20 08:46:56.746071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.137 BaseBdev2 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.137 [ 00:13:26.137 { 00:13:26.137 "name": "BaseBdev2", 00:13:26.137 "aliases": [ 00:13:26.137 
"009aafe2-920a-4e53-a849-025a35c9f179" 00:13:26.137 ], 00:13:26.137 "product_name": "Malloc disk", 00:13:26.137 "block_size": 512, 00:13:26.137 "num_blocks": 65536, 00:13:26.137 "uuid": "009aafe2-920a-4e53-a849-025a35c9f179", 00:13:26.137 "assigned_rate_limits": { 00:13:26.137 "rw_ios_per_sec": 0, 00:13:26.137 "rw_mbytes_per_sec": 0, 00:13:26.137 "r_mbytes_per_sec": 0, 00:13:26.137 "w_mbytes_per_sec": 0 00:13:26.137 }, 00:13:26.137 "claimed": false, 00:13:26.137 "zoned": false, 00:13:26.137 "supported_io_types": { 00:13:26.137 "read": true, 00:13:26.137 "write": true, 00:13:26.137 "unmap": true, 00:13:26.137 "flush": true, 00:13:26.137 "reset": true, 00:13:26.137 "nvme_admin": false, 00:13:26.137 "nvme_io": false, 00:13:26.137 "nvme_io_md": false, 00:13:26.137 "write_zeroes": true, 00:13:26.137 "zcopy": true, 00:13:26.137 "get_zone_info": false, 00:13:26.137 "zone_management": false, 00:13:26.137 "zone_append": false, 00:13:26.137 "compare": false, 00:13:26.137 "compare_and_write": false, 00:13:26.137 "abort": true, 00:13:26.137 "seek_hole": false, 00:13:26.137 "seek_data": false, 00:13:26.137 "copy": true, 00:13:26.137 "nvme_iov_md": false 00:13:26.137 }, 00:13:26.137 "memory_domains": [ 00:13:26.137 { 00:13:26.137 "dma_device_id": "system", 00:13:26.137 "dma_device_type": 1 00:13:26.137 }, 00:13:26.137 { 00:13:26.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.137 "dma_device_type": 2 00:13:26.137 } 00:13:26.137 ], 00:13:26.137 "driver_specific": {} 00:13:26.137 } 00:13:26.137 ] 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:26.137 08:46:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.137 08:46:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.137 BaseBdev3 00:13:26.137 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.137 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:26.137 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:26.137 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:26.137 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:26.137 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:26.137 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:26.137 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:26.137 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.137 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.137 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.137 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:26.137 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.137 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.137 [ 00:13:26.137 { 
00:13:26.137 "name": "BaseBdev3", 00:13:26.137 "aliases": [ 00:13:26.137 "a105ed87-aa24-4da1-b5d4-f5601c81e7e4" 00:13:26.137 ], 00:13:26.137 "product_name": "Malloc disk", 00:13:26.137 "block_size": 512, 00:13:26.137 "num_blocks": 65536, 00:13:26.137 "uuid": "a105ed87-aa24-4da1-b5d4-f5601c81e7e4", 00:13:26.137 "assigned_rate_limits": { 00:13:26.137 "rw_ios_per_sec": 0, 00:13:26.137 "rw_mbytes_per_sec": 0, 00:13:26.137 "r_mbytes_per_sec": 0, 00:13:26.137 "w_mbytes_per_sec": 0 00:13:26.137 }, 00:13:26.137 "claimed": false, 00:13:26.137 "zoned": false, 00:13:26.137 "supported_io_types": { 00:13:26.137 "read": true, 00:13:26.137 "write": true, 00:13:26.137 "unmap": true, 00:13:26.137 "flush": true, 00:13:26.137 "reset": true, 00:13:26.137 "nvme_admin": false, 00:13:26.137 "nvme_io": false, 00:13:26.137 "nvme_io_md": false, 00:13:26.137 "write_zeroes": true, 00:13:26.137 "zcopy": true, 00:13:26.137 "get_zone_info": false, 00:13:26.137 "zone_management": false, 00:13:26.137 "zone_append": false, 00:13:26.137 "compare": false, 00:13:26.137 "compare_and_write": false, 00:13:26.137 "abort": true, 00:13:26.137 "seek_hole": false, 00:13:26.137 "seek_data": false, 00:13:26.137 "copy": true, 00:13:26.137 "nvme_iov_md": false 00:13:26.137 }, 00:13:26.137 "memory_domains": [ 00:13:26.137 { 00:13:26.137 "dma_device_id": "system", 00:13:26.137 "dma_device_type": 1 00:13:26.137 }, 00:13:26.137 { 00:13:26.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.137 "dma_device_type": 2 00:13:26.137 } 00:13:26.137 ], 00:13:26.137 "driver_specific": {} 00:13:26.137 } 00:13:26.137 ] 00:13:26.137 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.137 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:26.137 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:26.137 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:13:26.137 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:26.137 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.137 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.398 BaseBdev4 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:26.398 [ 00:13:26.398 { 00:13:26.398 "name": "BaseBdev4", 00:13:26.398 "aliases": [ 00:13:26.398 "3227ba91-27f6-4a84-87a4-345274caaa8c" 00:13:26.398 ], 00:13:26.398 "product_name": "Malloc disk", 00:13:26.398 "block_size": 512, 00:13:26.398 "num_blocks": 65536, 00:13:26.398 "uuid": "3227ba91-27f6-4a84-87a4-345274caaa8c", 00:13:26.398 "assigned_rate_limits": { 00:13:26.398 "rw_ios_per_sec": 0, 00:13:26.398 "rw_mbytes_per_sec": 0, 00:13:26.398 "r_mbytes_per_sec": 0, 00:13:26.398 "w_mbytes_per_sec": 0 00:13:26.398 }, 00:13:26.398 "claimed": false, 00:13:26.398 "zoned": false, 00:13:26.398 "supported_io_types": { 00:13:26.398 "read": true, 00:13:26.398 "write": true, 00:13:26.398 "unmap": true, 00:13:26.398 "flush": true, 00:13:26.398 "reset": true, 00:13:26.398 "nvme_admin": false, 00:13:26.398 "nvme_io": false, 00:13:26.398 "nvme_io_md": false, 00:13:26.398 "write_zeroes": true, 00:13:26.398 "zcopy": true, 00:13:26.398 "get_zone_info": false, 00:13:26.398 "zone_management": false, 00:13:26.398 "zone_append": false, 00:13:26.398 "compare": false, 00:13:26.398 "compare_and_write": false, 00:13:26.398 "abort": true, 00:13:26.398 "seek_hole": false, 00:13:26.398 "seek_data": false, 00:13:26.398 "copy": true, 00:13:26.398 "nvme_iov_md": false 00:13:26.398 }, 00:13:26.398 "memory_domains": [ 00:13:26.398 { 00:13:26.398 "dma_device_id": "system", 00:13:26.398 "dma_device_type": 1 00:13:26.398 }, 00:13:26.398 { 00:13:26.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.398 "dma_device_type": 2 00:13:26.398 } 00:13:26.398 ], 00:13:26.398 "driver_specific": {} 00:13:26.398 } 00:13:26.398 ] 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:26.398 08:46:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.398 [2024-11-20 08:46:57.127229] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:26.398 [2024-11-20 08:46:57.127437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:26.398 [2024-11-20 08:46:57.127575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:26.398 [2024-11-20 08:46:57.130215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:26.398 [2024-11-20 08:46:57.130419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.398 "name": "Existed_Raid", 00:13:26.398 "uuid": "35581081-ba86-4844-a862-c532269a273d", 00:13:26.398 "strip_size_kb": 64, 00:13:26.398 "state": "configuring", 00:13:26.398 "raid_level": "concat", 00:13:26.398 "superblock": true, 00:13:26.398 "num_base_bdevs": 4, 00:13:26.398 "num_base_bdevs_discovered": 3, 00:13:26.398 "num_base_bdevs_operational": 4, 00:13:26.398 "base_bdevs_list": [ 00:13:26.398 { 00:13:26.398 "name": "BaseBdev1", 00:13:26.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.398 "is_configured": false, 00:13:26.398 "data_offset": 0, 00:13:26.398 "data_size": 0 00:13:26.398 }, 00:13:26.398 { 00:13:26.398 "name": "BaseBdev2", 00:13:26.398 "uuid": "009aafe2-920a-4e53-a849-025a35c9f179", 00:13:26.398 "is_configured": true, 00:13:26.398 "data_offset": 2048, 00:13:26.398 "data_size": 63488 
00:13:26.398 }, 00:13:26.398 { 00:13:26.398 "name": "BaseBdev3", 00:13:26.398 "uuid": "a105ed87-aa24-4da1-b5d4-f5601c81e7e4", 00:13:26.398 "is_configured": true, 00:13:26.398 "data_offset": 2048, 00:13:26.398 "data_size": 63488 00:13:26.398 }, 00:13:26.398 { 00:13:26.398 "name": "BaseBdev4", 00:13:26.398 "uuid": "3227ba91-27f6-4a84-87a4-345274caaa8c", 00:13:26.398 "is_configured": true, 00:13:26.398 "data_offset": 2048, 00:13:26.398 "data_size": 63488 00:13:26.398 } 00:13:26.398 ] 00:13:26.398 }' 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.398 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.966 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:26.966 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.966 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.967 [2024-11-20 08:46:57.627365] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:26.967 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.967 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:26.967 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.967 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.967 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:26.967 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.967 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:26.967 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.967 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.967 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.967 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.967 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.967 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.967 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.967 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.967 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.967 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.967 "name": "Existed_Raid", 00:13:26.967 "uuid": "35581081-ba86-4844-a862-c532269a273d", 00:13:26.967 "strip_size_kb": 64, 00:13:26.967 "state": "configuring", 00:13:26.967 "raid_level": "concat", 00:13:26.967 "superblock": true, 00:13:26.967 "num_base_bdevs": 4, 00:13:26.967 "num_base_bdevs_discovered": 2, 00:13:26.967 "num_base_bdevs_operational": 4, 00:13:26.967 "base_bdevs_list": [ 00:13:26.967 { 00:13:26.967 "name": "BaseBdev1", 00:13:26.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.967 "is_configured": false, 00:13:26.967 "data_offset": 0, 00:13:26.967 "data_size": 0 00:13:26.967 }, 00:13:26.967 { 00:13:26.967 "name": null, 00:13:26.967 "uuid": "009aafe2-920a-4e53-a849-025a35c9f179", 00:13:26.967 "is_configured": false, 00:13:26.967 "data_offset": 0, 00:13:26.967 "data_size": 63488 
00:13:26.967 }, 00:13:26.967 { 00:13:26.967 "name": "BaseBdev3", 00:13:26.967 "uuid": "a105ed87-aa24-4da1-b5d4-f5601c81e7e4", 00:13:26.967 "is_configured": true, 00:13:26.967 "data_offset": 2048, 00:13:26.967 "data_size": 63488 00:13:26.967 }, 00:13:26.967 { 00:13:26.967 "name": "BaseBdev4", 00:13:26.967 "uuid": "3227ba91-27f6-4a84-87a4-345274caaa8c", 00:13:26.967 "is_configured": true, 00:13:26.967 "data_offset": 2048, 00:13:26.967 "data_size": 63488 00:13:26.967 } 00:13:26.967 ] 00:13:26.967 }' 00:13:26.967 08:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.967 08:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.226 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.226 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.226 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.226 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.486 [2024-11-20 08:46:58.227363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:27.486 BaseBdev1 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.486 [ 00:13:27.486 { 00:13:27.486 "name": "BaseBdev1", 00:13:27.486 "aliases": [ 00:13:27.486 "64fc4b6c-ccb3-4709-a462-4bfff0d3ee0d" 00:13:27.486 ], 00:13:27.486 "product_name": "Malloc disk", 00:13:27.486 "block_size": 512, 00:13:27.486 "num_blocks": 65536, 00:13:27.486 "uuid": "64fc4b6c-ccb3-4709-a462-4bfff0d3ee0d", 00:13:27.486 "assigned_rate_limits": { 00:13:27.486 "rw_ios_per_sec": 0, 00:13:27.486 "rw_mbytes_per_sec": 0, 
00:13:27.486 "r_mbytes_per_sec": 0, 00:13:27.486 "w_mbytes_per_sec": 0 00:13:27.486 }, 00:13:27.486 "claimed": true, 00:13:27.486 "claim_type": "exclusive_write", 00:13:27.486 "zoned": false, 00:13:27.486 "supported_io_types": { 00:13:27.486 "read": true, 00:13:27.486 "write": true, 00:13:27.486 "unmap": true, 00:13:27.486 "flush": true, 00:13:27.486 "reset": true, 00:13:27.486 "nvme_admin": false, 00:13:27.486 "nvme_io": false, 00:13:27.486 "nvme_io_md": false, 00:13:27.486 "write_zeroes": true, 00:13:27.486 "zcopy": true, 00:13:27.486 "get_zone_info": false, 00:13:27.486 "zone_management": false, 00:13:27.486 "zone_append": false, 00:13:27.486 "compare": false, 00:13:27.486 "compare_and_write": false, 00:13:27.486 "abort": true, 00:13:27.486 "seek_hole": false, 00:13:27.486 "seek_data": false, 00:13:27.486 "copy": true, 00:13:27.486 "nvme_iov_md": false 00:13:27.486 }, 00:13:27.486 "memory_domains": [ 00:13:27.486 { 00:13:27.486 "dma_device_id": "system", 00:13:27.486 "dma_device_type": 1 00:13:27.486 }, 00:13:27.486 { 00:13:27.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.486 "dma_device_type": 2 00:13:27.486 } 00:13:27.486 ], 00:13:27.486 "driver_specific": {} 00:13:27.486 } 00:13:27.486 ] 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:27.486 08:46:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.486 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.486 "name": "Existed_Raid", 00:13:27.486 "uuid": "35581081-ba86-4844-a862-c532269a273d", 00:13:27.486 "strip_size_kb": 64, 00:13:27.486 "state": "configuring", 00:13:27.486 "raid_level": "concat", 00:13:27.486 "superblock": true, 00:13:27.486 "num_base_bdevs": 4, 00:13:27.486 "num_base_bdevs_discovered": 3, 00:13:27.486 "num_base_bdevs_operational": 4, 00:13:27.486 "base_bdevs_list": [ 00:13:27.486 { 00:13:27.486 "name": "BaseBdev1", 00:13:27.486 "uuid": "64fc4b6c-ccb3-4709-a462-4bfff0d3ee0d", 00:13:27.486 "is_configured": true, 00:13:27.486 "data_offset": 2048, 00:13:27.486 "data_size": 63488 00:13:27.486 }, 00:13:27.487 { 
00:13:27.487 "name": null, 00:13:27.487 "uuid": "009aafe2-920a-4e53-a849-025a35c9f179", 00:13:27.487 "is_configured": false, 00:13:27.487 "data_offset": 0, 00:13:27.487 "data_size": 63488 00:13:27.487 }, 00:13:27.487 { 00:13:27.487 "name": "BaseBdev3", 00:13:27.487 "uuid": "a105ed87-aa24-4da1-b5d4-f5601c81e7e4", 00:13:27.487 "is_configured": true, 00:13:27.487 "data_offset": 2048, 00:13:27.487 "data_size": 63488 00:13:27.487 }, 00:13:27.487 { 00:13:27.487 "name": "BaseBdev4", 00:13:27.487 "uuid": "3227ba91-27f6-4a84-87a4-345274caaa8c", 00:13:27.487 "is_configured": true, 00:13:27.487 "data_offset": 2048, 00:13:27.487 "data_size": 63488 00:13:27.487 } 00:13:27.487 ] 00:13:27.487 }' 00:13:27.487 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.487 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.055 [2024-11-20 08:46:58.867639] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.055 08:46:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.055 "name": "Existed_Raid", 00:13:28.055 "uuid": "35581081-ba86-4844-a862-c532269a273d", 00:13:28.055 "strip_size_kb": 64, 00:13:28.055 "state": "configuring", 00:13:28.055 "raid_level": "concat", 00:13:28.055 "superblock": true, 00:13:28.055 "num_base_bdevs": 4, 00:13:28.055 "num_base_bdevs_discovered": 2, 00:13:28.055 "num_base_bdevs_operational": 4, 00:13:28.055 "base_bdevs_list": [ 00:13:28.055 { 00:13:28.055 "name": "BaseBdev1", 00:13:28.055 "uuid": "64fc4b6c-ccb3-4709-a462-4bfff0d3ee0d", 00:13:28.055 "is_configured": true, 00:13:28.055 "data_offset": 2048, 00:13:28.055 "data_size": 63488 00:13:28.055 }, 00:13:28.055 { 00:13:28.055 "name": null, 00:13:28.055 "uuid": "009aafe2-920a-4e53-a849-025a35c9f179", 00:13:28.055 "is_configured": false, 00:13:28.055 "data_offset": 0, 00:13:28.055 "data_size": 63488 00:13:28.055 }, 00:13:28.055 { 00:13:28.055 "name": null, 00:13:28.055 "uuid": "a105ed87-aa24-4da1-b5d4-f5601c81e7e4", 00:13:28.055 "is_configured": false, 00:13:28.055 "data_offset": 0, 00:13:28.055 "data_size": 63488 00:13:28.055 }, 00:13:28.055 { 00:13:28.055 "name": "BaseBdev4", 00:13:28.055 "uuid": "3227ba91-27f6-4a84-87a4-345274caaa8c", 00:13:28.055 "is_configured": true, 00:13:28.055 "data_offset": 2048, 00:13:28.055 "data_size": 63488 00:13:28.055 } 00:13:28.055 ] 00:13:28.055 }' 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.055 08:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:28.623 
08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.623 [2024-11-20 08:46:59.435804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.623 "name": "Existed_Raid", 00:13:28.623 "uuid": "35581081-ba86-4844-a862-c532269a273d", 00:13:28.623 "strip_size_kb": 64, 00:13:28.623 "state": "configuring", 00:13:28.623 "raid_level": "concat", 00:13:28.623 "superblock": true, 00:13:28.623 "num_base_bdevs": 4, 00:13:28.623 "num_base_bdevs_discovered": 3, 00:13:28.623 "num_base_bdevs_operational": 4, 00:13:28.623 "base_bdevs_list": [ 00:13:28.623 { 00:13:28.623 "name": "BaseBdev1", 00:13:28.623 "uuid": "64fc4b6c-ccb3-4709-a462-4bfff0d3ee0d", 00:13:28.623 "is_configured": true, 00:13:28.623 "data_offset": 2048, 00:13:28.623 "data_size": 63488 00:13:28.623 }, 00:13:28.623 { 00:13:28.623 "name": null, 00:13:28.623 "uuid": "009aafe2-920a-4e53-a849-025a35c9f179", 00:13:28.623 "is_configured": false, 00:13:28.623 "data_offset": 0, 00:13:28.623 "data_size": 63488 00:13:28.623 }, 00:13:28.623 { 00:13:28.623 "name": "BaseBdev3", 00:13:28.623 "uuid": "a105ed87-aa24-4da1-b5d4-f5601c81e7e4", 00:13:28.623 "is_configured": true, 00:13:28.623 "data_offset": 2048, 00:13:28.623 "data_size": 63488 00:13:28.623 }, 00:13:28.623 { 00:13:28.623 "name": "BaseBdev4", 00:13:28.623 "uuid": 
"3227ba91-27f6-4a84-87a4-345274caaa8c", 00:13:28.623 "is_configured": true, 00:13:28.623 "data_offset": 2048, 00:13:28.623 "data_size": 63488 00:13:28.623 } 00:13:28.623 ] 00:13:28.623 }' 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.623 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.189 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:29.189 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.189 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.189 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.189 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.189 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:29.189 08:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:29.189 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.189 08:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.189 [2024-11-20 08:46:59.988019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:29.189 08:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.189 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:29.189 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.189 08:47:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.189 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:29.189 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.189 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.189 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.189 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.189 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.189 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.189 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.189 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.189 08:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.189 08:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.189 08:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.448 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.448 "name": "Existed_Raid", 00:13:29.448 "uuid": "35581081-ba86-4844-a862-c532269a273d", 00:13:29.448 "strip_size_kb": 64, 00:13:29.448 "state": "configuring", 00:13:29.448 "raid_level": "concat", 00:13:29.448 "superblock": true, 00:13:29.448 "num_base_bdevs": 4, 00:13:29.448 "num_base_bdevs_discovered": 2, 00:13:29.448 "num_base_bdevs_operational": 4, 00:13:29.448 "base_bdevs_list": [ 00:13:29.448 { 00:13:29.448 "name": null, 00:13:29.448 
"uuid": "64fc4b6c-ccb3-4709-a462-4bfff0d3ee0d", 00:13:29.448 "is_configured": false, 00:13:29.448 "data_offset": 0, 00:13:29.448 "data_size": 63488 00:13:29.448 }, 00:13:29.448 { 00:13:29.448 "name": null, 00:13:29.448 "uuid": "009aafe2-920a-4e53-a849-025a35c9f179", 00:13:29.448 "is_configured": false, 00:13:29.448 "data_offset": 0, 00:13:29.448 "data_size": 63488 00:13:29.448 }, 00:13:29.448 { 00:13:29.448 "name": "BaseBdev3", 00:13:29.448 "uuid": "a105ed87-aa24-4da1-b5d4-f5601c81e7e4", 00:13:29.448 "is_configured": true, 00:13:29.448 "data_offset": 2048, 00:13:29.448 "data_size": 63488 00:13:29.448 }, 00:13:29.448 { 00:13:29.448 "name": "BaseBdev4", 00:13:29.448 "uuid": "3227ba91-27f6-4a84-87a4-345274caaa8c", 00:13:29.448 "is_configured": true, 00:13:29.448 "data_offset": 2048, 00:13:29.448 "data_size": 63488 00:13:29.448 } 00:13:29.448 ] 00:13:29.448 }' 00:13:29.448 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.448 08:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.706 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:29.706 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.706 08:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.706 08:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.706 08:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.706 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:29.706 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:29.706 08:47:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.706 08:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.706 [2024-11-20 08:47:00.614121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:29.706 08:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.706 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:29.706 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.706 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.706 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:29.966 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.966 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.966 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.966 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.966 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.966 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.966 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.966 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.966 08:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.966 08:47:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.966 08:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.966 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.966 "name": "Existed_Raid", 00:13:29.966 "uuid": "35581081-ba86-4844-a862-c532269a273d", 00:13:29.966 "strip_size_kb": 64, 00:13:29.966 "state": "configuring", 00:13:29.966 "raid_level": "concat", 00:13:29.966 "superblock": true, 00:13:29.966 "num_base_bdevs": 4, 00:13:29.966 "num_base_bdevs_discovered": 3, 00:13:29.966 "num_base_bdevs_operational": 4, 00:13:29.966 "base_bdevs_list": [ 00:13:29.966 { 00:13:29.966 "name": null, 00:13:29.966 "uuid": "64fc4b6c-ccb3-4709-a462-4bfff0d3ee0d", 00:13:29.966 "is_configured": false, 00:13:29.966 "data_offset": 0, 00:13:29.966 "data_size": 63488 00:13:29.966 }, 00:13:29.966 { 00:13:29.966 "name": "BaseBdev2", 00:13:29.966 "uuid": "009aafe2-920a-4e53-a849-025a35c9f179", 00:13:29.966 "is_configured": true, 00:13:29.966 "data_offset": 2048, 00:13:29.966 "data_size": 63488 00:13:29.966 }, 00:13:29.966 { 00:13:29.966 "name": "BaseBdev3", 00:13:29.966 "uuid": "a105ed87-aa24-4da1-b5d4-f5601c81e7e4", 00:13:29.966 "is_configured": true, 00:13:29.966 "data_offset": 2048, 00:13:29.966 "data_size": 63488 00:13:29.966 }, 00:13:29.966 { 00:13:29.966 "name": "BaseBdev4", 00:13:29.966 "uuid": "3227ba91-27f6-4a84-87a4-345274caaa8c", 00:13:29.966 "is_configured": true, 00:13:29.966 "data_offset": 2048, 00:13:29.966 "data_size": 63488 00:13:29.966 } 00:13:29.966 ] 00:13:29.966 }' 00:13:29.966 08:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.966 08:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.225 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:30.225 08:47:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.225 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.225 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.225 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.225 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:30.225 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.225 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.225 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.225 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 64fc4b6c-ccb3-4709-a462-4bfff0d3ee0d 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.485 [2024-11-20 08:47:01.220135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:30.485 NewBaseBdev 00:13:30.485 [2024-11-20 08:47:01.220615] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:30.485 [2024-11-20 08:47:01.220640] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:30.485 [2024-11-20 08:47:01.220965] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:30.485 [2024-11-20 08:47:01.221177] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:30.485 [2024-11-20 08:47:01.221201] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:30.485 [2024-11-20 08:47:01.221353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.485 08:47:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.485 [ 00:13:30.485 { 00:13:30.485 "name": "NewBaseBdev", 00:13:30.485 "aliases": [ 00:13:30.485 "64fc4b6c-ccb3-4709-a462-4bfff0d3ee0d" 00:13:30.485 ], 00:13:30.485 "product_name": "Malloc disk", 00:13:30.485 "block_size": 512, 00:13:30.485 "num_blocks": 65536, 00:13:30.485 "uuid": "64fc4b6c-ccb3-4709-a462-4bfff0d3ee0d", 00:13:30.485 "assigned_rate_limits": { 00:13:30.485 "rw_ios_per_sec": 0, 00:13:30.485 "rw_mbytes_per_sec": 0, 00:13:30.485 "r_mbytes_per_sec": 0, 00:13:30.485 "w_mbytes_per_sec": 0 00:13:30.485 }, 00:13:30.485 "claimed": true, 00:13:30.485 "claim_type": "exclusive_write", 00:13:30.485 "zoned": false, 00:13:30.485 "supported_io_types": { 00:13:30.485 "read": true, 00:13:30.485 "write": true, 00:13:30.485 "unmap": true, 00:13:30.485 "flush": true, 00:13:30.485 "reset": true, 00:13:30.485 "nvme_admin": false, 00:13:30.485 "nvme_io": false, 00:13:30.485 "nvme_io_md": false, 00:13:30.485 "write_zeroes": true, 00:13:30.485 "zcopy": true, 00:13:30.485 "get_zone_info": false, 00:13:30.485 "zone_management": false, 00:13:30.485 "zone_append": false, 00:13:30.485 "compare": false, 00:13:30.485 "compare_and_write": false, 00:13:30.485 "abort": true, 00:13:30.485 "seek_hole": false, 00:13:30.485 "seek_data": false, 00:13:30.485 "copy": true, 00:13:30.485 "nvme_iov_md": false 00:13:30.485 }, 00:13:30.485 "memory_domains": [ 00:13:30.485 { 00:13:30.485 "dma_device_id": "system", 00:13:30.485 "dma_device_type": 1 00:13:30.485 }, 00:13:30.485 { 00:13:30.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.485 "dma_device_type": 2 00:13:30.485 } 00:13:30.485 ], 00:13:30.485 "driver_specific": {} 00:13:30.485 } 00:13:30.485 ] 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:30.485 08:47:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.485 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.485 "name": "Existed_Raid", 00:13:30.485 "uuid": "35581081-ba86-4844-a862-c532269a273d", 00:13:30.485 "strip_size_kb": 64, 00:13:30.485 
"state": "online", 00:13:30.485 "raid_level": "concat", 00:13:30.485 "superblock": true, 00:13:30.485 "num_base_bdevs": 4, 00:13:30.485 "num_base_bdevs_discovered": 4, 00:13:30.485 "num_base_bdevs_operational": 4, 00:13:30.485 "base_bdevs_list": [ 00:13:30.485 { 00:13:30.485 "name": "NewBaseBdev", 00:13:30.485 "uuid": "64fc4b6c-ccb3-4709-a462-4bfff0d3ee0d", 00:13:30.485 "is_configured": true, 00:13:30.485 "data_offset": 2048, 00:13:30.485 "data_size": 63488 00:13:30.485 }, 00:13:30.485 { 00:13:30.485 "name": "BaseBdev2", 00:13:30.485 "uuid": "009aafe2-920a-4e53-a849-025a35c9f179", 00:13:30.485 "is_configured": true, 00:13:30.485 "data_offset": 2048, 00:13:30.485 "data_size": 63488 00:13:30.485 }, 00:13:30.485 { 00:13:30.485 "name": "BaseBdev3", 00:13:30.485 "uuid": "a105ed87-aa24-4da1-b5d4-f5601c81e7e4", 00:13:30.485 "is_configured": true, 00:13:30.485 "data_offset": 2048, 00:13:30.485 "data_size": 63488 00:13:30.485 }, 00:13:30.485 { 00:13:30.485 "name": "BaseBdev4", 00:13:30.486 "uuid": "3227ba91-27f6-4a84-87a4-345274caaa8c", 00:13:30.486 "is_configured": true, 00:13:30.486 "data_offset": 2048, 00:13:30.486 "data_size": 63488 00:13:30.486 } 00:13:30.486 ] 00:13:30.486 }' 00:13:30.486 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.486 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.053 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:31.053 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:31.053 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:31.053 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:31.053 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:31.053 
08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:31.053 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:31.053 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.053 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.053 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:31.053 [2024-11-20 08:47:01.768849] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:31.053 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.053 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:31.053 "name": "Existed_Raid", 00:13:31.053 "aliases": [ 00:13:31.053 "35581081-ba86-4844-a862-c532269a273d" 00:13:31.053 ], 00:13:31.053 "product_name": "Raid Volume", 00:13:31.053 "block_size": 512, 00:13:31.053 "num_blocks": 253952, 00:13:31.053 "uuid": "35581081-ba86-4844-a862-c532269a273d", 00:13:31.053 "assigned_rate_limits": { 00:13:31.053 "rw_ios_per_sec": 0, 00:13:31.053 "rw_mbytes_per_sec": 0, 00:13:31.053 "r_mbytes_per_sec": 0, 00:13:31.053 "w_mbytes_per_sec": 0 00:13:31.053 }, 00:13:31.053 "claimed": false, 00:13:31.053 "zoned": false, 00:13:31.053 "supported_io_types": { 00:13:31.053 "read": true, 00:13:31.053 "write": true, 00:13:31.053 "unmap": true, 00:13:31.053 "flush": true, 00:13:31.053 "reset": true, 00:13:31.053 "nvme_admin": false, 00:13:31.053 "nvme_io": false, 00:13:31.053 "nvme_io_md": false, 00:13:31.053 "write_zeroes": true, 00:13:31.053 "zcopy": false, 00:13:31.053 "get_zone_info": false, 00:13:31.053 "zone_management": false, 00:13:31.053 "zone_append": false, 00:13:31.053 "compare": false, 00:13:31.053 "compare_and_write": false, 00:13:31.053 "abort": 
false, 00:13:31.053 "seek_hole": false, 00:13:31.053 "seek_data": false, 00:13:31.053 "copy": false, 00:13:31.053 "nvme_iov_md": false 00:13:31.053 }, 00:13:31.053 "memory_domains": [ 00:13:31.053 { 00:13:31.053 "dma_device_id": "system", 00:13:31.053 "dma_device_type": 1 00:13:31.053 }, 00:13:31.053 { 00:13:31.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.053 "dma_device_type": 2 00:13:31.053 }, 00:13:31.053 { 00:13:31.053 "dma_device_id": "system", 00:13:31.053 "dma_device_type": 1 00:13:31.053 }, 00:13:31.053 { 00:13:31.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.053 "dma_device_type": 2 00:13:31.053 }, 00:13:31.053 { 00:13:31.053 "dma_device_id": "system", 00:13:31.053 "dma_device_type": 1 00:13:31.053 }, 00:13:31.053 { 00:13:31.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.053 "dma_device_type": 2 00:13:31.053 }, 00:13:31.053 { 00:13:31.053 "dma_device_id": "system", 00:13:31.053 "dma_device_type": 1 00:13:31.053 }, 00:13:31.053 { 00:13:31.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.053 "dma_device_type": 2 00:13:31.053 } 00:13:31.053 ], 00:13:31.053 "driver_specific": { 00:13:31.053 "raid": { 00:13:31.053 "uuid": "35581081-ba86-4844-a862-c532269a273d", 00:13:31.053 "strip_size_kb": 64, 00:13:31.053 "state": "online", 00:13:31.053 "raid_level": "concat", 00:13:31.053 "superblock": true, 00:13:31.053 "num_base_bdevs": 4, 00:13:31.053 "num_base_bdevs_discovered": 4, 00:13:31.053 "num_base_bdevs_operational": 4, 00:13:31.053 "base_bdevs_list": [ 00:13:31.053 { 00:13:31.053 "name": "NewBaseBdev", 00:13:31.053 "uuid": "64fc4b6c-ccb3-4709-a462-4bfff0d3ee0d", 00:13:31.053 "is_configured": true, 00:13:31.053 "data_offset": 2048, 00:13:31.053 "data_size": 63488 00:13:31.053 }, 00:13:31.053 { 00:13:31.053 "name": "BaseBdev2", 00:13:31.053 "uuid": "009aafe2-920a-4e53-a849-025a35c9f179", 00:13:31.053 "is_configured": true, 00:13:31.053 "data_offset": 2048, 00:13:31.053 "data_size": 63488 00:13:31.053 }, 00:13:31.053 { 00:13:31.053 
"name": "BaseBdev3", 00:13:31.053 "uuid": "a105ed87-aa24-4da1-b5d4-f5601c81e7e4", 00:13:31.053 "is_configured": true, 00:13:31.053 "data_offset": 2048, 00:13:31.053 "data_size": 63488 00:13:31.053 }, 00:13:31.053 { 00:13:31.053 "name": "BaseBdev4", 00:13:31.053 "uuid": "3227ba91-27f6-4a84-87a4-345274caaa8c", 00:13:31.053 "is_configured": true, 00:13:31.053 "data_offset": 2048, 00:13:31.053 "data_size": 63488 00:13:31.053 } 00:13:31.053 ] 00:13:31.053 } 00:13:31.053 } 00:13:31.053 }' 00:13:31.053 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:31.053 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:31.053 BaseBdev2 00:13:31.053 BaseBdev3 00:13:31.053 BaseBdev4' 00:13:31.053 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.053 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:31.053 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.053 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:31.053 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.053 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.053 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.053 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.313 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.313 08:47:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.313 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.313 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:31.313 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.313 08:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.313 08:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.314 [2024-11-20 08:47:02.156478] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:31.314 [2024-11-20 08:47:02.156546] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:31.314 [2024-11-20 08:47:02.156654] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.314 [2024-11-20 08:47:02.156740] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.314 [2024-11-20 08:47:02.156756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72061 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72061 ']' 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72061 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72061 00:13:31.314 killing process with pid 72061 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72061' 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72061 00:13:31.314 [2024-11-20 08:47:02.197898] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:31.314 08:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72061 00:13:31.880 [2024-11-20 08:47:02.571253] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:33.252 08:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:33.252 00:13:33.252 real 0m13.250s 00:13:33.252 user 0m21.753s 00:13:33.252 sys 0m1.861s 00:13:33.252 08:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.252 
************************************ 00:13:33.252 END TEST raid_state_function_test_sb 00:13:33.252 ************************************ 00:13:33.252 08:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.252 08:47:03 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:13:33.252 08:47:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:33.252 08:47:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.252 08:47:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:33.252 ************************************ 00:13:33.252 START TEST raid_superblock_test 00:13:33.252 ************************************ 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72748 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72748 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72748 ']' 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.252 08:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.252 [2024-11-20 08:47:04.055783] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:13:33.253 [2024-11-20 08:47:04.055984] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72748 ] 00:13:33.511 [2024-11-20 08:47:04.235087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.511 [2024-11-20 08:47:04.424125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.769 [2024-11-20 08:47:04.666113] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.769 [2024-11-20 08:47:04.666210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.334 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:34.334 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:34.334 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:34.334 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:34.334 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:34.334 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:34.334 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:34.334 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:34.334 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:34.334 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:34.334 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:34.334 
08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.335 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.335 malloc1 00:13:34.335 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.335 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:34.335 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.335 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.335 [2024-11-20 08:47:05.212944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:34.335 [2024-11-20 08:47:05.213040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.335 [2024-11-20 08:47:05.213085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:34.335 [2024-11-20 08:47:05.213105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.335 [2024-11-20 08:47:05.216343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.335 [2024-11-20 08:47:05.216396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:34.335 pt1 00:13:34.335 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.335 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:34.335 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:34.335 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:34.335 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:34.335 08:47:05 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:34.335 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:34.335 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:34.335 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:34.335 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:34.335 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.335 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.592 malloc2 00:13:34.592 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.592 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:34.592 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.592 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.592 [2024-11-20 08:47:05.269134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:34.592 [2024-11-20 08:47:05.269229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.592 [2024-11-20 08:47:05.269271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:34.592 [2024-11-20 08:47:05.269290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.592 [2024-11-20 08:47:05.272551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.592 [2024-11-20 08:47:05.272605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:34.592 
pt2 00:13:34.592 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.593 malloc3 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.593 [2024-11-20 08:47:05.345409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:34.593 [2024-11-20 08:47:05.345500] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.593 [2024-11-20 08:47:05.345551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:34.593 [2024-11-20 08:47:05.345579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.593 [2024-11-20 08:47:05.349234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.593 [2024-11-20 08:47:05.349294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:34.593 pt3 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.593 malloc4 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.593 [2024-11-20 08:47:05.411757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:34.593 [2024-11-20 08:47:05.412089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.593 [2024-11-20 08:47:05.412176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:34.593 [2024-11-20 08:47:05.412210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.593 [2024-11-20 08:47:05.415798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.593 [2024-11-20 08:47:05.415872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:34.593 pt4 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.593 [2024-11-20 08:47:05.420197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:34.593 [2024-11-20 
08:47:05.423297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:34.593 [2024-11-20 08:47:05.423436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:34.593 [2024-11-20 08:47:05.423573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:34.593 [2024-11-20 08:47:05.423929] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:34.593 [2024-11-20 08:47:05.423958] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:34.593 [2024-11-20 08:47:05.424432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:34.593 [2024-11-20 08:47:05.424774] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:34.593 [2024-11-20 08:47:05.424810] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:34.593 [2024-11-20 08:47:05.425173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.593 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.593 "name": "raid_bdev1", 00:13:34.594 "uuid": "d628e541-8ec3-48d8-ab99-12fcc68edf89", 00:13:34.594 "strip_size_kb": 64, 00:13:34.594 "state": "online", 00:13:34.594 "raid_level": "concat", 00:13:34.594 "superblock": true, 00:13:34.594 "num_base_bdevs": 4, 00:13:34.594 "num_base_bdevs_discovered": 4, 00:13:34.594 "num_base_bdevs_operational": 4, 00:13:34.594 "base_bdevs_list": [ 00:13:34.594 { 00:13:34.594 "name": "pt1", 00:13:34.594 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:34.594 "is_configured": true, 00:13:34.594 "data_offset": 2048, 00:13:34.594 "data_size": 63488 00:13:34.594 }, 00:13:34.594 { 00:13:34.594 "name": "pt2", 00:13:34.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:34.594 "is_configured": true, 00:13:34.594 "data_offset": 2048, 00:13:34.594 "data_size": 63488 00:13:34.594 }, 00:13:34.594 { 00:13:34.594 "name": "pt3", 00:13:34.594 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:34.594 "is_configured": true, 00:13:34.594 "data_offset": 2048, 00:13:34.594 
"data_size": 63488 00:13:34.594 }, 00:13:34.594 { 00:13:34.594 "name": "pt4", 00:13:34.594 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:34.594 "is_configured": true, 00:13:34.594 "data_offset": 2048, 00:13:34.594 "data_size": 63488 00:13:34.594 } 00:13:34.594 ] 00:13:34.594 }' 00:13:34.594 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.594 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.159 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:35.159 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:35.160 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:35.160 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:35.160 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:35.160 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:35.160 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:35.160 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.160 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.160 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:35.160 [2024-11-20 08:47:05.901638] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.160 08:47:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.160 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:35.160 "name": "raid_bdev1", 00:13:35.160 "aliases": [ 00:13:35.160 "d628e541-8ec3-48d8-ab99-12fcc68edf89" 
00:13:35.160 ], 00:13:35.160 "product_name": "Raid Volume", 00:13:35.160 "block_size": 512, 00:13:35.160 "num_blocks": 253952, 00:13:35.160 "uuid": "d628e541-8ec3-48d8-ab99-12fcc68edf89", 00:13:35.160 "assigned_rate_limits": { 00:13:35.160 "rw_ios_per_sec": 0, 00:13:35.160 "rw_mbytes_per_sec": 0, 00:13:35.160 "r_mbytes_per_sec": 0, 00:13:35.160 "w_mbytes_per_sec": 0 00:13:35.160 }, 00:13:35.160 "claimed": false, 00:13:35.160 "zoned": false, 00:13:35.160 "supported_io_types": { 00:13:35.160 "read": true, 00:13:35.160 "write": true, 00:13:35.160 "unmap": true, 00:13:35.160 "flush": true, 00:13:35.160 "reset": true, 00:13:35.160 "nvme_admin": false, 00:13:35.160 "nvme_io": false, 00:13:35.160 "nvme_io_md": false, 00:13:35.160 "write_zeroes": true, 00:13:35.160 "zcopy": false, 00:13:35.160 "get_zone_info": false, 00:13:35.160 "zone_management": false, 00:13:35.160 "zone_append": false, 00:13:35.160 "compare": false, 00:13:35.160 "compare_and_write": false, 00:13:35.160 "abort": false, 00:13:35.160 "seek_hole": false, 00:13:35.160 "seek_data": false, 00:13:35.160 "copy": false, 00:13:35.160 "nvme_iov_md": false 00:13:35.160 }, 00:13:35.160 "memory_domains": [ 00:13:35.160 { 00:13:35.160 "dma_device_id": "system", 00:13:35.160 "dma_device_type": 1 00:13:35.160 }, 00:13:35.160 { 00:13:35.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.160 "dma_device_type": 2 00:13:35.160 }, 00:13:35.160 { 00:13:35.160 "dma_device_id": "system", 00:13:35.160 "dma_device_type": 1 00:13:35.160 }, 00:13:35.160 { 00:13:35.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.160 "dma_device_type": 2 00:13:35.160 }, 00:13:35.160 { 00:13:35.160 "dma_device_id": "system", 00:13:35.160 "dma_device_type": 1 00:13:35.160 }, 00:13:35.160 { 00:13:35.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.160 "dma_device_type": 2 00:13:35.160 }, 00:13:35.160 { 00:13:35.160 "dma_device_id": "system", 00:13:35.160 "dma_device_type": 1 00:13:35.160 }, 00:13:35.160 { 00:13:35.160 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:35.160 "dma_device_type": 2 00:13:35.160 } 00:13:35.160 ], 00:13:35.160 "driver_specific": { 00:13:35.160 "raid": { 00:13:35.160 "uuid": "d628e541-8ec3-48d8-ab99-12fcc68edf89", 00:13:35.160 "strip_size_kb": 64, 00:13:35.160 "state": "online", 00:13:35.160 "raid_level": "concat", 00:13:35.160 "superblock": true, 00:13:35.160 "num_base_bdevs": 4, 00:13:35.160 "num_base_bdevs_discovered": 4, 00:13:35.160 "num_base_bdevs_operational": 4, 00:13:35.160 "base_bdevs_list": [ 00:13:35.160 { 00:13:35.160 "name": "pt1", 00:13:35.160 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:35.160 "is_configured": true, 00:13:35.160 "data_offset": 2048, 00:13:35.160 "data_size": 63488 00:13:35.160 }, 00:13:35.160 { 00:13:35.160 "name": "pt2", 00:13:35.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:35.160 "is_configured": true, 00:13:35.160 "data_offset": 2048, 00:13:35.160 "data_size": 63488 00:13:35.160 }, 00:13:35.160 { 00:13:35.160 "name": "pt3", 00:13:35.160 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:35.160 "is_configured": true, 00:13:35.160 "data_offset": 2048, 00:13:35.160 "data_size": 63488 00:13:35.160 }, 00:13:35.160 { 00:13:35.160 "name": "pt4", 00:13:35.160 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:35.160 "is_configured": true, 00:13:35.160 "data_offset": 2048, 00:13:35.160 "data_size": 63488 00:13:35.160 } 00:13:35.160 ] 00:13:35.160 } 00:13:35.160 } 00:13:35.160 }' 00:13:35.160 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:35.160 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:35.160 pt2 00:13:35.160 pt3 00:13:35.160 pt4' 00:13:35.160 08:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.160 08:47:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:35.160 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.160 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:35.160 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.160 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.160 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.160 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.419 08:47:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.419 [2024-11-20 08:47:06.245674] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d628e541-8ec3-48d8-ab99-12fcc68edf89 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d628e541-8ec3-48d8-ab99-12fcc68edf89 ']' 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.419 [2024-11-20 08:47:06.293315] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:35.419 [2024-11-20 08:47:06.293349] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:35.419 [2024-11-20 08:47:06.293451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:35.419 [2024-11-20 08:47:06.293568] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:35.419 [2024-11-20 08:47:06.293590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:35.419 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.677 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:35.677 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:35.677 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:35.677 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:35.677 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.677 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.677 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.677 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:35.677 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:35.677 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.677 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.677 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.678 08:47:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.678 [2024-11-20 08:47:06.445365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:35.678 [2024-11-20 08:47:06.447932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:35.678 [2024-11-20 08:47:06.448003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:35.678 [2024-11-20 08:47:06.448057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:35.678 [2024-11-20 08:47:06.448136] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:35.678 [2024-11-20 08:47:06.448231] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:35.678 [2024-11-20 08:47:06.448267] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:35.678 [2024-11-20 08:47:06.448306] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:35.678 [2024-11-20 08:47:06.448328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:35.678 [2024-11-20 08:47:06.448344] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:13:35.678 request: 00:13:35.678 { 00:13:35.678 "name": "raid_bdev1", 00:13:35.678 "raid_level": "concat", 00:13:35.678 "base_bdevs": [ 00:13:35.678 "malloc1", 00:13:35.678 "malloc2", 00:13:35.678 "malloc3", 00:13:35.678 "malloc4" 00:13:35.678 ], 00:13:35.678 "strip_size_kb": 64, 00:13:35.678 "superblock": false, 00:13:35.678 "method": "bdev_raid_create", 00:13:35.678 "req_id": 1 00:13:35.678 } 00:13:35.678 Got JSON-RPC error response 00:13:35.678 response: 00:13:35.678 { 00:13:35.678 "code": -17, 00:13:35.678 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:35.678 } 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.678 [2024-11-20 08:47:06.513369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:35.678 [2024-11-20 08:47:06.513447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.678 [2024-11-20 08:47:06.513475] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:35.678 [2024-11-20 08:47:06.513492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.678 [2024-11-20 08:47:06.516436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.678 [2024-11-20 08:47:06.516643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:35.678 [2024-11-20 08:47:06.516765] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:35.678 [2024-11-20 08:47:06.516848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:35.678 pt1 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.678 "name": "raid_bdev1", 00:13:35.678 "uuid": "d628e541-8ec3-48d8-ab99-12fcc68edf89", 00:13:35.678 "strip_size_kb": 64, 00:13:35.678 "state": "configuring", 00:13:35.678 "raid_level": "concat", 00:13:35.678 "superblock": true, 00:13:35.678 "num_base_bdevs": 4, 00:13:35.678 "num_base_bdevs_discovered": 1, 00:13:35.678 "num_base_bdevs_operational": 4, 00:13:35.678 "base_bdevs_list": [ 00:13:35.678 { 00:13:35.678 "name": "pt1", 00:13:35.678 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:35.678 "is_configured": true, 00:13:35.678 "data_offset": 2048, 00:13:35.678 "data_size": 63488 00:13:35.678 }, 00:13:35.678 { 00:13:35.678 "name": null, 00:13:35.678 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:35.678 "is_configured": false, 00:13:35.678 "data_offset": 2048, 00:13:35.678 "data_size": 63488 00:13:35.678 }, 00:13:35.678 { 00:13:35.678 "name": null, 00:13:35.678 
"uuid": "00000000-0000-0000-0000-000000000003", 00:13:35.678 "is_configured": false, 00:13:35.678 "data_offset": 2048, 00:13:35.678 "data_size": 63488 00:13:35.678 }, 00:13:35.678 { 00:13:35.678 "name": null, 00:13:35.678 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:35.678 "is_configured": false, 00:13:35.678 "data_offset": 2048, 00:13:35.678 "data_size": 63488 00:13:35.678 } 00:13:35.678 ] 00:13:35.678 }' 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.678 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.245 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:36.245 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:36.245 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.245 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.245 [2024-11-20 08:47:06.969516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:36.245 [2024-11-20 08:47:06.969603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.245 [2024-11-20 08:47:06.969633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:36.245 [2024-11-20 08:47:06.969650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.245 [2024-11-20 08:47:06.970214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.245 [2024-11-20 08:47:06.970255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:36.245 [2024-11-20 08:47:06.970365] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:36.245 [2024-11-20 08:47:06.970411] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:36.245 pt2 00:13:36.245 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.245 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:36.245 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.246 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.246 [2024-11-20 08:47:06.977517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:36.246 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.246 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:13:36.246 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.246 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.246 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:36.246 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.246 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.246 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.246 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.246 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.246 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.246 08:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.246 08:47:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.246 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.246 08:47:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.246 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.246 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.246 "name": "raid_bdev1", 00:13:36.246 "uuid": "d628e541-8ec3-48d8-ab99-12fcc68edf89", 00:13:36.246 "strip_size_kb": 64, 00:13:36.246 "state": "configuring", 00:13:36.246 "raid_level": "concat", 00:13:36.246 "superblock": true, 00:13:36.246 "num_base_bdevs": 4, 00:13:36.246 "num_base_bdevs_discovered": 1, 00:13:36.246 "num_base_bdevs_operational": 4, 00:13:36.246 "base_bdevs_list": [ 00:13:36.246 { 00:13:36.246 "name": "pt1", 00:13:36.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:36.246 "is_configured": true, 00:13:36.246 "data_offset": 2048, 00:13:36.246 "data_size": 63488 00:13:36.246 }, 00:13:36.246 { 00:13:36.246 "name": null, 00:13:36.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:36.246 "is_configured": false, 00:13:36.246 "data_offset": 0, 00:13:36.246 "data_size": 63488 00:13:36.246 }, 00:13:36.246 { 00:13:36.246 "name": null, 00:13:36.246 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:36.246 "is_configured": false, 00:13:36.246 "data_offset": 2048, 00:13:36.246 "data_size": 63488 00:13:36.246 }, 00:13:36.246 { 00:13:36.246 "name": null, 00:13:36.246 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:36.246 "is_configured": false, 00:13:36.246 "data_offset": 2048, 00:13:36.246 "data_size": 63488 00:13:36.246 } 00:13:36.246 ] 00:13:36.246 }' 00:13:36.246 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.246 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.814 [2024-11-20 08:47:07.517693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:36.814 [2024-11-20 08:47:07.517792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.814 [2024-11-20 08:47:07.517821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:36.814 [2024-11-20 08:47:07.517836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.814 [2024-11-20 08:47:07.518396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.814 [2024-11-20 08:47:07.518422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:36.814 [2024-11-20 08:47:07.518535] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:36.814 [2024-11-20 08:47:07.518581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:36.814 pt2 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.814 [2024-11-20 08:47:07.529653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:36.814 [2024-11-20 08:47:07.529720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.814 [2024-11-20 08:47:07.529755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:36.814 [2024-11-20 08:47:07.529772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.814 [2024-11-20 08:47:07.530319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.814 [2024-11-20 08:47:07.530360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:36.814 [2024-11-20 08:47:07.530459] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:36.814 [2024-11-20 08:47:07.530491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:36.814 pt3 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.814 [2024-11-20 08:47:07.541685] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:13:36.814 [2024-11-20 08:47:07.541789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.814 [2024-11-20 08:47:07.541823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:36.814 [2024-11-20 08:47:07.541838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.814 [2024-11-20 08:47:07.542432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.814 [2024-11-20 08:47:07.542473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:36.814 [2024-11-20 08:47:07.542584] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:36.814 [2024-11-20 08:47:07.542617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:36.814 [2024-11-20 08:47:07.542796] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:36.814 [2024-11-20 08:47:07.542812] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:36.814 [2024-11-20 08:47:07.543136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:36.814 [2024-11-20 08:47:07.543353] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:36.814 [2024-11-20 08:47:07.543375] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:36.814 [2024-11-20 08:47:07.543537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.814 pt4 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:36.814 
08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.814 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.815 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.815 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.815 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.815 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.815 "name": "raid_bdev1", 00:13:36.815 "uuid": "d628e541-8ec3-48d8-ab99-12fcc68edf89", 00:13:36.815 "strip_size_kb": 64, 00:13:36.815 "state": "online", 00:13:36.815 "raid_level": "concat", 00:13:36.815 "superblock": true, 00:13:36.815 
"num_base_bdevs": 4, 00:13:36.815 "num_base_bdevs_discovered": 4, 00:13:36.815 "num_base_bdevs_operational": 4, 00:13:36.815 "base_bdevs_list": [ 00:13:36.815 { 00:13:36.815 "name": "pt1", 00:13:36.815 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:36.815 "is_configured": true, 00:13:36.815 "data_offset": 2048, 00:13:36.815 "data_size": 63488 00:13:36.815 }, 00:13:36.815 { 00:13:36.815 "name": "pt2", 00:13:36.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:36.815 "is_configured": true, 00:13:36.815 "data_offset": 2048, 00:13:36.815 "data_size": 63488 00:13:36.815 }, 00:13:36.815 { 00:13:36.815 "name": "pt3", 00:13:36.815 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:36.815 "is_configured": true, 00:13:36.815 "data_offset": 2048, 00:13:36.815 "data_size": 63488 00:13:36.815 }, 00:13:36.815 { 00:13:36.815 "name": "pt4", 00:13:36.815 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:36.815 "is_configured": true, 00:13:36.815 "data_offset": 2048, 00:13:36.815 "data_size": 63488 00:13:36.815 } 00:13:36.815 ] 00:13:36.815 }' 00:13:36.815 08:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.815 08:47:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.382 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:37.382 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:37.382 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:37.382 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:37.382 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:37.382 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:37.382 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
jq '.[]' 00:13:37.382 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:37.382 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.382 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.382 [2024-11-20 08:47:08.042275] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.382 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.382 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:37.382 "name": "raid_bdev1", 00:13:37.382 "aliases": [ 00:13:37.382 "d628e541-8ec3-48d8-ab99-12fcc68edf89" 00:13:37.382 ], 00:13:37.382 "product_name": "Raid Volume", 00:13:37.382 "block_size": 512, 00:13:37.382 "num_blocks": 253952, 00:13:37.382 "uuid": "d628e541-8ec3-48d8-ab99-12fcc68edf89", 00:13:37.382 "assigned_rate_limits": { 00:13:37.382 "rw_ios_per_sec": 0, 00:13:37.382 "rw_mbytes_per_sec": 0, 00:13:37.382 "r_mbytes_per_sec": 0, 00:13:37.382 "w_mbytes_per_sec": 0 00:13:37.382 }, 00:13:37.382 "claimed": false, 00:13:37.382 "zoned": false, 00:13:37.382 "supported_io_types": { 00:13:37.382 "read": true, 00:13:37.382 "write": true, 00:13:37.382 "unmap": true, 00:13:37.382 "flush": true, 00:13:37.382 "reset": true, 00:13:37.382 "nvme_admin": false, 00:13:37.382 "nvme_io": false, 00:13:37.382 "nvme_io_md": false, 00:13:37.382 "write_zeroes": true, 00:13:37.382 "zcopy": false, 00:13:37.382 "get_zone_info": false, 00:13:37.382 "zone_management": false, 00:13:37.382 "zone_append": false, 00:13:37.382 "compare": false, 00:13:37.382 "compare_and_write": false, 00:13:37.382 "abort": false, 00:13:37.382 "seek_hole": false, 00:13:37.382 "seek_data": false, 00:13:37.382 "copy": false, 00:13:37.382 "nvme_iov_md": false 00:13:37.382 }, 00:13:37.382 "memory_domains": [ 00:13:37.382 { 00:13:37.382 "dma_device_id": "system", 
00:13:37.382 "dma_device_type": 1 00:13:37.382 }, 00:13:37.382 { 00:13:37.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.382 "dma_device_type": 2 00:13:37.382 }, 00:13:37.382 { 00:13:37.382 "dma_device_id": "system", 00:13:37.382 "dma_device_type": 1 00:13:37.382 }, 00:13:37.382 { 00:13:37.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.382 "dma_device_type": 2 00:13:37.382 }, 00:13:37.382 { 00:13:37.382 "dma_device_id": "system", 00:13:37.382 "dma_device_type": 1 00:13:37.382 }, 00:13:37.382 { 00:13:37.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.382 "dma_device_type": 2 00:13:37.382 }, 00:13:37.382 { 00:13:37.382 "dma_device_id": "system", 00:13:37.382 "dma_device_type": 1 00:13:37.382 }, 00:13:37.382 { 00:13:37.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.382 "dma_device_type": 2 00:13:37.382 } 00:13:37.382 ], 00:13:37.382 "driver_specific": { 00:13:37.382 "raid": { 00:13:37.382 "uuid": "d628e541-8ec3-48d8-ab99-12fcc68edf89", 00:13:37.382 "strip_size_kb": 64, 00:13:37.382 "state": "online", 00:13:37.382 "raid_level": "concat", 00:13:37.382 "superblock": true, 00:13:37.382 "num_base_bdevs": 4, 00:13:37.382 "num_base_bdevs_discovered": 4, 00:13:37.382 "num_base_bdevs_operational": 4, 00:13:37.382 "base_bdevs_list": [ 00:13:37.382 { 00:13:37.382 "name": "pt1", 00:13:37.382 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:37.382 "is_configured": true, 00:13:37.382 "data_offset": 2048, 00:13:37.382 "data_size": 63488 00:13:37.382 }, 00:13:37.382 { 00:13:37.382 "name": "pt2", 00:13:37.383 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:37.383 "is_configured": true, 00:13:37.383 "data_offset": 2048, 00:13:37.383 "data_size": 63488 00:13:37.383 }, 00:13:37.383 { 00:13:37.383 "name": "pt3", 00:13:37.383 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:37.383 "is_configured": true, 00:13:37.383 "data_offset": 2048, 00:13:37.383 "data_size": 63488 00:13:37.383 }, 00:13:37.383 { 00:13:37.383 "name": "pt4", 00:13:37.383 
"uuid": "00000000-0000-0000-0000-000000000004", 00:13:37.383 "is_configured": true, 00:13:37.383 "data_offset": 2048, 00:13:37.383 "data_size": 63488 00:13:37.383 } 00:13:37.383 ] 00:13:37.383 } 00:13:37.383 } 00:13:37.383 }' 00:13:37.383 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:37.383 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:37.383 pt2 00:13:37.383 pt3 00:13:37.383 pt4' 00:13:37.383 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.383 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:37.383 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.383 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:37.383 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.383 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.383 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.383 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.383 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.383 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.383 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.383 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.383 
08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:37.383 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.383 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.383 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.641 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.641 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.641 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.641 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:37.641 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.641 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.641 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.641 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.641 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.641 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.641 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.641 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:37.641 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.641 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:37.641 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.641 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.641 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.641 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.641 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:37.641 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.641 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.642 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:37.642 [2024-11-20 08:47:08.422354] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.642 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.642 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d628e541-8ec3-48d8-ab99-12fcc68edf89 '!=' d628e541-8ec3-48d8-ab99-12fcc68edf89 ']' 00:13:37.642 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:13:37.642 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:37.642 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:37.642 08:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72748 00:13:37.642 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72748 ']' 00:13:37.642 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72748 00:13:37.642 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:37.642 08:47:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:37.642 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72748 00:13:37.642 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:37.642 killing process with pid 72748 00:13:37.642 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:37.642 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72748' 00:13:37.642 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72748 00:13:37.642 [2024-11-20 08:47:08.503754] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:37.642 08:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72748 00:13:37.642 [2024-11-20 08:47:08.503894] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.642 [2024-11-20 08:47:08.504007] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.642 [2024-11-20 08:47:08.504023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:38.208 [2024-11-20 08:47:08.858936] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:39.144 08:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:39.144 00:13:39.144 real 0m5.964s 00:13:39.144 user 0m8.916s 00:13:39.144 sys 0m0.912s 00:13:39.144 08:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.144 ************************************ 00:13:39.144 END TEST raid_superblock_test 00:13:39.144 ************************************ 00:13:39.144 08:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.144 
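The superblock test above pulls the configured base bdev names out of the `rpc_cmd bdev_get_bdevs -b raid_bdev1` dump with the jq filter `.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name`. As an illustrative sketch (not part of the test suite), the same selection can be reproduced in Python against a minimal stand-in for that JSON, keeping only the fields the filter touches:

```python
import json

# Minimal stand-in for the JSON that `rpc_cmd bdev_get_bdevs -b raid_bdev1`
# printed in the log above (only the fields the jq filter reads).
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "pt1", "is_configured": true},
        {"name": "pt2", "is_configured": true},
        {"name": "pt3", "is_configured": true},
        {"name": "pt4", "is_configured": true}
      ]
    }
  }
}
""")

# Equivalent of:
#   jq -r '.driver_specific.raid.base_bdevs_list[]
#          | select(.is_configured == true).name'
base_bdev_names = [
    b["name"]
    for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print(base_bdev_names)  # -> ['pt1', 'pt2', 'pt3', 'pt4']
```

The test then loops over these names and compares each base bdev's joined `block_size/md_size/md_interleave/dif_type` string against the raid bdev's (`[[ 512 == \5\1\2\ \ \ ]]` in the trace), which is why only configured members must be selected.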
08:47:09 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:13:39.144 08:47:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:39.144 08:47:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.144 08:47:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:39.144 ************************************ 00:13:39.144 START TEST raid_read_error_test 00:13:39.144 ************************************ 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6wWXp3hS7s 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73024 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:39.144 08:47:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73024 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73024 ']' 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:39.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:39.144 08:47:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.403 [2024-11-20 08:47:10.073407] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:13:39.403 [2024-11-20 08:47:10.074442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73024 ] 00:13:39.403 [2024-11-20 08:47:10.265651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.662 [2024-11-20 08:47:10.395911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.920 [2024-11-20 08:47:10.598766] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.920 [2024-11-20 08:47:10.598807] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.178 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:40.178 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:40.178 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:40.178 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:40.178 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.178 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.178 BaseBdev1_malloc 00:13:40.178 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.178 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:40.178 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.178 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.178 true 00:13:40.178 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:40.178 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:40.178 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.178 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.178 [2024-11-20 08:47:11.084409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:40.178 [2024-11-20 08:47:11.084477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.178 [2024-11-20 08:47:11.084506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:40.178 [2024-11-20 08:47:11.084535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.178 [2024-11-20 08:47:11.087263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.178 [2024-11-20 08:47:11.087312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:40.178 BaseBdev1 00:13:40.178 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.178 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:40.178 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:40.178 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.178 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.437 BaseBdev2_malloc 00:13:40.437 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.437 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:40.437 08:47:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.437 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.437 true 00:13:40.437 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.437 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:40.437 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.437 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.437 [2024-11-20 08:47:11.140130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:40.437 [2024-11-20 08:47:11.140213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.437 [2024-11-20 08:47:11.140251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:40.437 [2024-11-20 08:47:11.140269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.437 [2024-11-20 08:47:11.142946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.437 [2024-11-20 08:47:11.142996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:40.437 BaseBdev2 00:13:40.437 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.437 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:40.437 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:40.437 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.437 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.437 BaseBdev3_malloc 00:13:40.437 08:47:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.437 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:40.437 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.437 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.437 true 00:13:40.437 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.437 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:40.437 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.437 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.437 [2024-11-20 08:47:11.205287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:40.437 [2024-11-20 08:47:11.205352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.437 [2024-11-20 08:47:11.205379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:40.437 [2024-11-20 08:47:11.205397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.438 [2024-11-20 08:47:11.208131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.438 [2024-11-20 08:47:11.208197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:40.438 BaseBdev3 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.438 BaseBdev4_malloc 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.438 true 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.438 [2024-11-20 08:47:11.260520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:40.438 [2024-11-20 08:47:11.260582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.438 [2024-11-20 08:47:11.260609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:40.438 [2024-11-20 08:47:11.260627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.438 [2024-11-20 08:47:11.263337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.438 [2024-11-20 08:47:11.263407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:40.438 BaseBdev4 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.438 [2024-11-20 08:47:11.268593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:40.438 [2024-11-20 08:47:11.270962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:40.438 [2024-11-20 08:47:11.271076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:40.438 [2024-11-20 08:47:11.271206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:40.438 [2024-11-20 08:47:11.271503] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:40.438 [2024-11-20 08:47:11.271527] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:40.438 [2024-11-20 08:47:11.271822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:40.438 [2024-11-20 08:47:11.272045] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:40.438 [2024-11-20 08:47:11.272066] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:40.438 [2024-11-20 08:47:11.272276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:40.438 08:47:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.438 "name": "raid_bdev1", 00:13:40.438 "uuid": "9f0786d4-9986-45e4-a7f3-2d782f93132c", 00:13:40.438 "strip_size_kb": 64, 00:13:40.438 "state": "online", 00:13:40.438 "raid_level": "concat", 00:13:40.438 "superblock": true, 00:13:40.438 "num_base_bdevs": 4, 00:13:40.438 "num_base_bdevs_discovered": 4, 00:13:40.438 "num_base_bdevs_operational": 4, 00:13:40.438 "base_bdevs_list": [ 
00:13:40.438 { 00:13:40.438 "name": "BaseBdev1", 00:13:40.438 "uuid": "7612621c-34e4-593f-8a3b-357d6da69f74", 00:13:40.438 "is_configured": true, 00:13:40.438 "data_offset": 2048, 00:13:40.438 "data_size": 63488 00:13:40.438 }, 00:13:40.438 { 00:13:40.438 "name": "BaseBdev2", 00:13:40.438 "uuid": "7cc6e2f0-5b80-5afc-8a6e-9d86d00049d7", 00:13:40.438 "is_configured": true, 00:13:40.438 "data_offset": 2048, 00:13:40.438 "data_size": 63488 00:13:40.438 }, 00:13:40.438 { 00:13:40.438 "name": "BaseBdev3", 00:13:40.438 "uuid": "fcb6bc64-aea5-5e30-8664-a5591f42f6c1", 00:13:40.438 "is_configured": true, 00:13:40.438 "data_offset": 2048, 00:13:40.438 "data_size": 63488 00:13:40.438 }, 00:13:40.438 { 00:13:40.438 "name": "BaseBdev4", 00:13:40.438 "uuid": "fafdcc80-ce2b-5d4c-895c-9d5321c2fe04", 00:13:40.438 "is_configured": true, 00:13:40.438 "data_offset": 2048, 00:13:40.438 "data_size": 63488 00:13:40.438 } 00:13:40.438 ] 00:13:40.438 }' 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.438 08:47:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.006 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:41.006 08:47:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:41.006 [2024-11-20 08:47:11.918192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:41.942 08:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:41.942 08:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.942 08:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.942 08:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.942 08:47:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:41.942 08:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:41.942 08:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:41.942 08:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:41.942 08:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.942 08:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.942 08:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:41.942 08:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.942 08:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.942 08:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.942 08:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.942 08:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.942 08:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.943 08:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.943 08:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.943 08:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.943 08:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.943 08:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.943 08:47:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.943 "name": "raid_bdev1", 00:13:41.943 "uuid": "9f0786d4-9986-45e4-a7f3-2d782f93132c", 00:13:41.943 "strip_size_kb": 64, 00:13:41.943 "state": "online", 00:13:41.943 "raid_level": "concat", 00:13:41.943 "superblock": true, 00:13:41.943 "num_base_bdevs": 4, 00:13:41.943 "num_base_bdevs_discovered": 4, 00:13:41.943 "num_base_bdevs_operational": 4, 00:13:41.943 "base_bdevs_list": [ 00:13:41.943 { 00:13:41.943 "name": "BaseBdev1", 00:13:41.943 "uuid": "7612621c-34e4-593f-8a3b-357d6da69f74", 00:13:41.943 "is_configured": true, 00:13:41.943 "data_offset": 2048, 00:13:41.943 "data_size": 63488 00:13:41.943 }, 00:13:41.943 { 00:13:41.943 "name": "BaseBdev2", 00:13:41.943 "uuid": "7cc6e2f0-5b80-5afc-8a6e-9d86d00049d7", 00:13:41.943 "is_configured": true, 00:13:41.943 "data_offset": 2048, 00:13:41.943 "data_size": 63488 00:13:41.943 }, 00:13:41.943 { 00:13:41.943 "name": "BaseBdev3", 00:13:41.943 "uuid": "fcb6bc64-aea5-5e30-8664-a5591f42f6c1", 00:13:41.943 "is_configured": true, 00:13:41.943 "data_offset": 2048, 00:13:41.943 "data_size": 63488 00:13:41.943 }, 00:13:41.943 { 00:13:41.943 "name": "BaseBdev4", 00:13:41.943 "uuid": "fafdcc80-ce2b-5d4c-895c-9d5321c2fe04", 00:13:41.943 "is_configured": true, 00:13:41.943 "data_offset": 2048, 00:13:41.943 "data_size": 63488 00:13:41.943 } 00:13:41.943 ] 00:13:41.943 }' 00:13:41.943 08:47:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.943 08:47:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.510 08:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:42.510 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.510 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.510 [2024-11-20 08:47:13.333733] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:42.510 [2024-11-20 08:47:13.333776] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:42.510 [2024-11-20 08:47:13.337042] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.510 [2024-11-20 08:47:13.337123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.510 [2024-11-20 08:47:13.337203] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.510 [2024-11-20 08:47:13.337228] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:42.510 { 00:13:42.510 "results": [ 00:13:42.510 { 00:13:42.510 "job": "raid_bdev1", 00:13:42.510 "core_mask": "0x1", 00:13:42.510 "workload": "randrw", 00:13:42.510 "percentage": 50, 00:13:42.510 "status": "finished", 00:13:42.510 "queue_depth": 1, 00:13:42.510 "io_size": 131072, 00:13:42.510 "runtime": 1.412993, 00:13:42.510 "iops": 10195.379594944914, 00:13:42.510 "mibps": 1274.4224493681143, 00:13:42.510 "io_failed": 1, 00:13:42.510 "io_timeout": 0, 00:13:42.510 "avg_latency_us": 136.68999791767888, 00:13:42.510 "min_latency_us": 39.09818181818182, 00:13:42.510 "max_latency_us": 1980.9745454545455 00:13:42.510 } 00:13:42.510 ], 00:13:42.510 "core_count": 1 00:13:42.510 } 00:13:42.510 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.510 08:47:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73024 00:13:42.510 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73024 ']' 00:13:42.510 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73024 00:13:42.510 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:42.510 08:47:13 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:42.510 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73024 00:13:42.510 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:42.510 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:42.510 killing process with pid 73024 00:13:42.510 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73024' 00:13:42.510 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73024 00:13:42.510 [2024-11-20 08:47:13.376307] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:42.510 08:47:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73024 00:13:42.769 [2024-11-20 08:47:13.669227] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:44.145 08:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:44.145 08:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6wWXp3hS7s 00:13:44.145 08:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:44.145 08:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:44.145 08:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:44.145 08:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:44.145 08:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:44.145 08:47:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:44.145 00:13:44.145 real 0m4.823s 00:13:44.145 user 0m5.944s 00:13:44.145 sys 0m0.611s 00:13:44.145 08:47:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:13:44.145 ************************************ 00:13:44.145 END TEST raid_read_error_test 00:13:44.145 ************************************ 00:13:44.145 08:47:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.145 08:47:14 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:13:44.145 08:47:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:44.145 08:47:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.145 08:47:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:44.145 ************************************ 00:13:44.145 START TEST raid_write_error_test 00:13:44.145 ************************************ 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cCbB1qz1JT 00:13:44.145 08:47:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73171 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73171 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73171 ']' 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.145 08:47:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.145 [2024-11-20 08:47:14.952446] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:13:44.145 [2024-11-20 08:47:14.952891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73171 ] 00:13:44.403 [2024-11-20 08:47:15.143605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.403 [2024-11-20 08:47:15.296945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.662 [2024-11-20 08:47:15.530366] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:44.662 [2024-11-20 08:47:15.530424] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.289 08:47:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.289 08:47:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:45.289 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:45.289 08:47:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:45.289 08:47:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.289 08:47:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.289 BaseBdev1_malloc 00:13:45.289 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.289 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:45.289 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.289 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.289 true 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.290 [2024-11-20 08:47:16.053998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:45.290 [2024-11-20 08:47:16.054079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.290 [2024-11-20 08:47:16.054108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:45.290 [2024-11-20 08:47:16.054127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.290 [2024-11-20 08:47:16.056896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.290 [2024-11-20 08:47:16.057107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:45.290 BaseBdev1 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.290 BaseBdev2_malloc 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:45.290 08:47:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.290 true 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.290 [2024-11-20 08:47:16.113274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:45.290 [2024-11-20 08:47:16.113493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.290 [2024-11-20 08:47:16.113528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:45.290 [2024-11-20 08:47:16.113548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.290 [2024-11-20 08:47:16.116332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.290 [2024-11-20 08:47:16.116382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:45.290 BaseBdev2 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:45.290 BaseBdev3_malloc 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.290 true 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.290 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.290 [2024-11-20 08:47:16.180942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:45.290 [2024-11-20 08:47:16.181188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.290 [2024-11-20 08:47:16.181225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:45.290 [2024-11-20 08:47:16.181245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.290 [2024-11-20 08:47:16.184059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.290 [2024-11-20 08:47:16.184110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:45.564 BaseBdev3 00:13:45.564 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.564 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:45.564 08:47:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:45.564 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.564 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.564 BaseBdev4_malloc 00:13:45.564 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.564 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:45.564 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.564 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.564 true 00:13:45.564 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.564 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:45.564 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.564 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.564 [2024-11-20 08:47:16.235598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:45.564 [2024-11-20 08:47:16.235674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.564 [2024-11-20 08:47:16.235701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:45.564 [2024-11-20 08:47:16.235719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.564 [2024-11-20 08:47:16.238472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.564 [2024-11-20 08:47:16.238554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:45.564 BaseBdev4 
00:13:45.564 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.564 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:45.564 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.565 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.565 [2024-11-20 08:47:16.243673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:45.565 [2024-11-20 08:47:16.246337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:45.565 [2024-11-20 08:47:16.246582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:45.565 [2024-11-20 08:47:16.246805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:45.565 [2024-11-20 08:47:16.247255] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:45.565 [2024-11-20 08:47:16.247393] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:45.565 [2024-11-20 08:47:16.247757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:45.565 [2024-11-20 08:47:16.248111] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:45.565 [2024-11-20 08:47:16.248261] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:45.565 [2024-11-20 08:47:16.248704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.565 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.565 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:13:45.565 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.565 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.565 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:45.565 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.565 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:45.565 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.565 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.565 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.565 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.565 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.565 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.565 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.565 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.565 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.565 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.565 "name": "raid_bdev1", 00:13:45.565 "uuid": "f5936b6f-7add-4d15-be86-9429c6d14ba0", 00:13:45.565 "strip_size_kb": 64, 00:13:45.565 "state": "online", 00:13:45.565 "raid_level": "concat", 00:13:45.565 "superblock": true, 00:13:45.565 "num_base_bdevs": 4, 00:13:45.565 "num_base_bdevs_discovered": 4, 00:13:45.565 
"num_base_bdevs_operational": 4, 00:13:45.565 "base_bdevs_list": [ 00:13:45.565 { 00:13:45.565 "name": "BaseBdev1", 00:13:45.565 "uuid": "119c90b9-ebf4-52cf-b019-dc00daaeb208", 00:13:45.565 "is_configured": true, 00:13:45.565 "data_offset": 2048, 00:13:45.565 "data_size": 63488 00:13:45.565 }, 00:13:45.565 { 00:13:45.565 "name": "BaseBdev2", 00:13:45.565 "uuid": "f0cf0e36-6d52-5cfd-a8f8-d0b5cf022c36", 00:13:45.565 "is_configured": true, 00:13:45.565 "data_offset": 2048, 00:13:45.565 "data_size": 63488 00:13:45.565 }, 00:13:45.565 { 00:13:45.565 "name": "BaseBdev3", 00:13:45.565 "uuid": "3b3e014f-a063-5089-83a3-ebf2a7b99954", 00:13:45.565 "is_configured": true, 00:13:45.565 "data_offset": 2048, 00:13:45.565 "data_size": 63488 00:13:45.565 }, 00:13:45.565 { 00:13:45.565 "name": "BaseBdev4", 00:13:45.565 "uuid": "39a8b22b-a80c-5c2e-8142-112c31b59a47", 00:13:45.565 "is_configured": true, 00:13:45.565 "data_offset": 2048, 00:13:45.565 "data_size": 63488 00:13:45.565 } 00:13:45.565 ] 00:13:45.565 }' 00:13:45.565 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.565 08:47:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.133 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:46.133 08:47:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:46.133 [2024-11-20 08:47:16.878216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:47.072 08:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:47.072 08:47:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.072 08:47:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.072 08:47:17 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.072 08:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:47.072 08:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:47.072 08:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:47.073 08:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:47.073 08:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.073 08:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.073 08:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:47.073 08:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.073 08:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.073 08:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.073 08:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.073 08:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.073 08:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.073 08:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.073 08:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.073 08:47:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.073 08:47:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.073 08:47:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.073 08:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.073 "name": "raid_bdev1", 00:13:47.073 "uuid": "f5936b6f-7add-4d15-be86-9429c6d14ba0", 00:13:47.073 "strip_size_kb": 64, 00:13:47.073 "state": "online", 00:13:47.073 "raid_level": "concat", 00:13:47.073 "superblock": true, 00:13:47.073 "num_base_bdevs": 4, 00:13:47.073 "num_base_bdevs_discovered": 4, 00:13:47.073 "num_base_bdevs_operational": 4, 00:13:47.073 "base_bdevs_list": [ 00:13:47.073 { 00:13:47.073 "name": "BaseBdev1", 00:13:47.073 "uuid": "119c90b9-ebf4-52cf-b019-dc00daaeb208", 00:13:47.073 "is_configured": true, 00:13:47.073 "data_offset": 2048, 00:13:47.073 "data_size": 63488 00:13:47.073 }, 00:13:47.073 { 00:13:47.073 "name": "BaseBdev2", 00:13:47.073 "uuid": "f0cf0e36-6d52-5cfd-a8f8-d0b5cf022c36", 00:13:47.073 "is_configured": true, 00:13:47.073 "data_offset": 2048, 00:13:47.073 "data_size": 63488 00:13:47.073 }, 00:13:47.073 { 00:13:47.073 "name": "BaseBdev3", 00:13:47.073 "uuid": "3b3e014f-a063-5089-83a3-ebf2a7b99954", 00:13:47.073 "is_configured": true, 00:13:47.073 "data_offset": 2048, 00:13:47.073 "data_size": 63488 00:13:47.073 }, 00:13:47.073 { 00:13:47.073 "name": "BaseBdev4", 00:13:47.073 "uuid": "39a8b22b-a80c-5c2e-8142-112c31b59a47", 00:13:47.073 "is_configured": true, 00:13:47.073 "data_offset": 2048, 00:13:47.073 "data_size": 63488 00:13:47.073 } 00:13:47.073 ] 00:13:47.073 }' 00:13:47.073 08:47:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.073 08:47:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.641 08:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:47.641 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.641 08:47:18 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:47.641 [2024-11-20 08:47:18.313135] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:47.641 [2024-11-20 08:47:18.313354] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:47.641 [2024-11-20 08:47:18.316737] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:47.641 [2024-11-20 08:47:18.316943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.642 [2024-11-20 08:47:18.317017] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:47.642 [2024-11-20 08:47:18.317041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:47.642 { 00:13:47.642 "results": [ 00:13:47.642 { 00:13:47.642 "job": "raid_bdev1", 00:13:47.642 "core_mask": "0x1", 00:13:47.642 "workload": "randrw", 00:13:47.642 "percentage": 50, 00:13:47.642 "status": "finished", 00:13:47.642 "queue_depth": 1, 00:13:47.642 "io_size": 131072, 00:13:47.642 "runtime": 1.432521, 00:13:47.642 "iops": 11009.96076148273, 00:13:47.642 "mibps": 1376.2450951853411, 00:13:47.642 "io_failed": 1, 00:13:47.642 "io_timeout": 0, 00:13:47.642 "avg_latency_us": 126.76444879915621, 00:13:47.642 "min_latency_us": 38.63272727272727, 00:13:47.642 "max_latency_us": 1802.24 00:13:47.642 } 00:13:47.642 ], 00:13:47.642 "core_count": 1 00:13:47.642 } 00:13:47.642 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.642 08:47:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73171 00:13:47.642 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73171 ']' 00:13:47.642 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73171 00:13:47.642 08:47:18 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:13:47.642 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:47.642 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73171 00:13:47.642 killing process with pid 73171 00:13:47.642 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:47.642 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:47.642 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73171' 00:13:47.642 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73171 00:13:47.642 [2024-11-20 08:47:18.352686] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:47.642 08:47:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73171 00:13:47.901 [2024-11-20 08:47:18.632873] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:48.836 08:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cCbB1qz1JT 00:13:48.836 08:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:48.836 08:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:48.836 08:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:13:48.836 08:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:48.836 08:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:48.836 08:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:48.836 08:47:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:13:48.836 00:13:48.836 real 0m4.883s 00:13:48.836 user 0m6.100s 
00:13:48.836 sys 0m0.580s 00:13:48.836 08:47:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:48.836 ************************************ 00:13:48.836 END TEST raid_write_error_test 00:13:48.836 ************************************ 00:13:48.836 08:47:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.095 08:47:19 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:49.095 08:47:19 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:13:49.095 08:47:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:49.095 08:47:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.095 08:47:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:49.095 ************************************ 00:13:49.095 START TEST raid_state_function_test 00:13:49.095 ************************************ 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:49.095 
08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:49.095 Process raid pid: 73313 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 
00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73313 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73313' 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73313 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73313 ']' 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.095 08:47:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.095 [2024-11-20 08:47:19.888561] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:13:49.095 [2024-11-20 08:47:19.889048] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.355 [2024-11-20 08:47:20.070306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.355 [2024-11-20 08:47:20.199497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.620 [2024-11-20 08:47:20.406242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.620 [2024-11-20 08:47:20.406525] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.883 08:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:49.883 08:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:49.883 08:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:49.883 08:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.883 08:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.142 [2024-11-20 08:47:20.802444] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:50.142 [2024-11-20 08:47:20.802726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:50.142 [2024-11-20 08:47:20.802754] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:50.142 [2024-11-20 08:47:20.802773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:50.142 [2024-11-20 08:47:20.802783] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:50.142 [2024-11-20 08:47:20.802797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:50.142 [2024-11-20 08:47:20.802807] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:50.142 [2024-11-20 08:47:20.802820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:50.142 08:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.142 08:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:50.142 08:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.142 08:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.142 08:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.142 08:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.142 08:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:50.142 08:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.142 08:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.142 08:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.142 08:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.142 08:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.142 08:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.142 08:47:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.142 08:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.142 08:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.142 08:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.142 "name": "Existed_Raid", 00:13:50.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.142 "strip_size_kb": 0, 00:13:50.142 "state": "configuring", 00:13:50.142 "raid_level": "raid1", 00:13:50.142 "superblock": false, 00:13:50.142 "num_base_bdevs": 4, 00:13:50.142 "num_base_bdevs_discovered": 0, 00:13:50.142 "num_base_bdevs_operational": 4, 00:13:50.142 "base_bdevs_list": [ 00:13:50.142 { 00:13:50.142 "name": "BaseBdev1", 00:13:50.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.142 "is_configured": false, 00:13:50.142 "data_offset": 0, 00:13:50.142 "data_size": 0 00:13:50.142 }, 00:13:50.142 { 00:13:50.142 "name": "BaseBdev2", 00:13:50.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.142 "is_configured": false, 00:13:50.142 "data_offset": 0, 00:13:50.142 "data_size": 0 00:13:50.142 }, 00:13:50.142 { 00:13:50.142 "name": "BaseBdev3", 00:13:50.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.142 "is_configured": false, 00:13:50.142 "data_offset": 0, 00:13:50.142 "data_size": 0 00:13:50.142 }, 00:13:50.142 { 00:13:50.142 "name": "BaseBdev4", 00:13:50.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.142 "is_configured": false, 00:13:50.142 "data_offset": 0, 00:13:50.142 "data_size": 0 00:13:50.142 } 00:13:50.142 ] 00:13:50.142 }' 00:13:50.142 08:47:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.142 08:47:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.401 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:13:50.401 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.401 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.401 [2024-11-20 08:47:21.314560] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:50.401 [2024-11-20 08:47:21.314623] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:50.699 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.699 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:50.699 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.699 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.699 [2024-11-20 08:47:21.322538] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:50.700 [2024-11-20 08:47:21.322590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:50.700 [2024-11-20 08:47:21.322604] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:50.700 [2024-11-20 08:47:21.322620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:50.700 [2024-11-20 08:47:21.322630] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:50.700 [2024-11-20 08:47:21.322643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:50.700 [2024-11-20 08:47:21.322652] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:50.700 [2024-11-20 08:47:21.322666] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 [2024-11-20 08:47:21.368570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.700 BaseBdev1 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 [ 00:13:50.700 { 00:13:50.700 "name": "BaseBdev1", 00:13:50.700 "aliases": [ 00:13:50.700 "68807dc5-07a4-4044-8aeb-fcac1068f5cb" 00:13:50.700 ], 00:13:50.700 "product_name": "Malloc disk", 00:13:50.700 "block_size": 512, 00:13:50.700 "num_blocks": 65536, 00:13:50.700 "uuid": "68807dc5-07a4-4044-8aeb-fcac1068f5cb", 00:13:50.700 "assigned_rate_limits": { 00:13:50.700 "rw_ios_per_sec": 0, 00:13:50.700 "rw_mbytes_per_sec": 0, 00:13:50.700 "r_mbytes_per_sec": 0, 00:13:50.700 "w_mbytes_per_sec": 0 00:13:50.700 }, 00:13:50.700 "claimed": true, 00:13:50.700 "claim_type": "exclusive_write", 00:13:50.700 "zoned": false, 00:13:50.700 "supported_io_types": { 00:13:50.700 "read": true, 00:13:50.700 "write": true, 00:13:50.700 "unmap": true, 00:13:50.700 "flush": true, 00:13:50.700 "reset": true, 00:13:50.700 "nvme_admin": false, 00:13:50.700 "nvme_io": false, 00:13:50.700 "nvme_io_md": false, 00:13:50.700 "write_zeroes": true, 00:13:50.700 "zcopy": true, 00:13:50.700 "get_zone_info": false, 00:13:50.700 "zone_management": false, 00:13:50.700 "zone_append": false, 00:13:50.700 "compare": false, 00:13:50.700 "compare_and_write": false, 00:13:50.700 "abort": true, 00:13:50.700 "seek_hole": false, 00:13:50.700 "seek_data": false, 00:13:50.700 "copy": true, 00:13:50.700 "nvme_iov_md": false 00:13:50.700 }, 00:13:50.700 "memory_domains": [ 00:13:50.700 { 00:13:50.700 "dma_device_id": "system", 00:13:50.700 "dma_device_type": 1 00:13:50.700 }, 00:13:50.700 { 00:13:50.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.700 "dma_device_type": 2 00:13:50.700 } 00:13:50.700 ], 00:13:50.700 "driver_specific": {} 00:13:50.700 } 00:13:50.700 ] 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.700 "name": "Existed_Raid", 
00:13:50.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.700 "strip_size_kb": 0, 00:13:50.700 "state": "configuring", 00:13:50.700 "raid_level": "raid1", 00:13:50.700 "superblock": false, 00:13:50.700 "num_base_bdevs": 4, 00:13:50.700 "num_base_bdevs_discovered": 1, 00:13:50.700 "num_base_bdevs_operational": 4, 00:13:50.700 "base_bdevs_list": [ 00:13:50.700 { 00:13:50.700 "name": "BaseBdev1", 00:13:50.700 "uuid": "68807dc5-07a4-4044-8aeb-fcac1068f5cb", 00:13:50.700 "is_configured": true, 00:13:50.700 "data_offset": 0, 00:13:50.700 "data_size": 65536 00:13:50.700 }, 00:13:50.700 { 00:13:50.700 "name": "BaseBdev2", 00:13:50.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.700 "is_configured": false, 00:13:50.700 "data_offset": 0, 00:13:50.700 "data_size": 0 00:13:50.700 }, 00:13:50.700 { 00:13:50.700 "name": "BaseBdev3", 00:13:50.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.700 "is_configured": false, 00:13:50.700 "data_offset": 0, 00:13:50.700 "data_size": 0 00:13:50.700 }, 00:13:50.700 { 00:13:50.700 "name": "BaseBdev4", 00:13:50.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.700 "is_configured": false, 00:13:50.700 "data_offset": 0, 00:13:50.700 "data_size": 0 00:13:50.700 } 00:13:50.700 ] 00:13:50.700 }' 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.700 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.270 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:51.270 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.270 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.270 [2024-11-20 08:47:21.916803] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:51.270 [2024-11-20 08:47:21.917003] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:51.270 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.270 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:51.270 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.270 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.270 [2024-11-20 08:47:21.924865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:51.270 [2024-11-20 08:47:21.927296] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:51.270 [2024-11-20 08:47:21.927477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:51.270 [2024-11-20 08:47:21.927506] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:51.270 [2024-11-20 08:47:21.927525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:51.270 [2024-11-20 08:47:21.927536] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:51.270 [2024-11-20 08:47:21.927550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:51.270 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.270 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:51.270 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:51.270 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:51.270 
08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.270 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.270 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.270 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.270 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:51.270 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.270 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.270 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.270 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.270 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.270 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.270 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.271 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.271 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.271 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.271 "name": "Existed_Raid", 00:13:51.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.271 "strip_size_kb": 0, 00:13:51.271 "state": "configuring", 00:13:51.271 "raid_level": "raid1", 00:13:51.271 "superblock": false, 00:13:51.271 "num_base_bdevs": 4, 00:13:51.271 "num_base_bdevs_discovered": 1, 
00:13:51.271 "num_base_bdevs_operational": 4, 00:13:51.271 "base_bdevs_list": [ 00:13:51.271 { 00:13:51.271 "name": "BaseBdev1", 00:13:51.271 "uuid": "68807dc5-07a4-4044-8aeb-fcac1068f5cb", 00:13:51.271 "is_configured": true, 00:13:51.271 "data_offset": 0, 00:13:51.271 "data_size": 65536 00:13:51.271 }, 00:13:51.271 { 00:13:51.271 "name": "BaseBdev2", 00:13:51.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.271 "is_configured": false, 00:13:51.271 "data_offset": 0, 00:13:51.271 "data_size": 0 00:13:51.271 }, 00:13:51.271 { 00:13:51.271 "name": "BaseBdev3", 00:13:51.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.271 "is_configured": false, 00:13:51.271 "data_offset": 0, 00:13:51.271 "data_size": 0 00:13:51.271 }, 00:13:51.271 { 00:13:51.271 "name": "BaseBdev4", 00:13:51.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.271 "is_configured": false, 00:13:51.271 "data_offset": 0, 00:13:51.271 "data_size": 0 00:13:51.271 } 00:13:51.271 ] 00:13:51.271 }' 00:13:51.271 08:47:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.271 08:47:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.839 08:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:51.839 08:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.839 08:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.839 [2024-11-20 08:47:22.491048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:51.839 BaseBdev2 00:13:51.839 08:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.839 08:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:51.839 08:47:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:51.839 08:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:51.839 08:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:51.839 08:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:51.839 08:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:51.839 08:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:51.839 08:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.839 08:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.839 08:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.839 08:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:51.839 08:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.839 08:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.839 [ 00:13:51.839 { 00:13:51.839 "name": "BaseBdev2", 00:13:51.839 "aliases": [ 00:13:51.839 "2bcef6ff-653d-4f59-865f-5be42173a9e0" 00:13:51.839 ], 00:13:51.839 "product_name": "Malloc disk", 00:13:51.839 "block_size": 512, 00:13:51.839 "num_blocks": 65536, 00:13:51.839 "uuid": "2bcef6ff-653d-4f59-865f-5be42173a9e0", 00:13:51.839 "assigned_rate_limits": { 00:13:51.839 "rw_ios_per_sec": 0, 00:13:51.839 "rw_mbytes_per_sec": 0, 00:13:51.839 "r_mbytes_per_sec": 0, 00:13:51.839 "w_mbytes_per_sec": 0 00:13:51.839 }, 00:13:51.839 "claimed": true, 00:13:51.839 "claim_type": "exclusive_write", 00:13:51.839 "zoned": false, 00:13:51.839 "supported_io_types": { 00:13:51.839 "read": true, 
00:13:51.839 "write": true, 00:13:51.839 "unmap": true, 00:13:51.839 "flush": true, 00:13:51.839 "reset": true, 00:13:51.839 "nvme_admin": false, 00:13:51.839 "nvme_io": false, 00:13:51.839 "nvme_io_md": false, 00:13:51.839 "write_zeroes": true, 00:13:51.839 "zcopy": true, 00:13:51.839 "get_zone_info": false, 00:13:51.839 "zone_management": false, 00:13:51.839 "zone_append": false, 00:13:51.839 "compare": false, 00:13:51.839 "compare_and_write": false, 00:13:51.839 "abort": true, 00:13:51.839 "seek_hole": false, 00:13:51.839 "seek_data": false, 00:13:51.840 "copy": true, 00:13:51.840 "nvme_iov_md": false 00:13:51.840 }, 00:13:51.840 "memory_domains": [ 00:13:51.840 { 00:13:51.840 "dma_device_id": "system", 00:13:51.840 "dma_device_type": 1 00:13:51.840 }, 00:13:51.840 { 00:13:51.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.840 "dma_device_type": 2 00:13:51.840 } 00:13:51.840 ], 00:13:51.840 "driver_specific": {} 00:13:51.840 } 00:13:51.840 ] 00:13:51.840 08:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.840 08:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:51.840 08:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:51.840 08:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:51.840 08:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:51.840 08:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.840 08:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.840 08:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.840 08:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:51.840 08:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:51.840 08:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.840 08:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.840 08:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.840 08:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.840 08:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.840 08:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.840 08:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.840 08:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.840 08:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.840 08:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.840 "name": "Existed_Raid", 00:13:51.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.840 "strip_size_kb": 0, 00:13:51.840 "state": "configuring", 00:13:51.840 "raid_level": "raid1", 00:13:51.840 "superblock": false, 00:13:51.840 "num_base_bdevs": 4, 00:13:51.840 "num_base_bdevs_discovered": 2, 00:13:51.840 "num_base_bdevs_operational": 4, 00:13:51.840 "base_bdevs_list": [ 00:13:51.840 { 00:13:51.840 "name": "BaseBdev1", 00:13:51.840 "uuid": "68807dc5-07a4-4044-8aeb-fcac1068f5cb", 00:13:51.840 "is_configured": true, 00:13:51.840 "data_offset": 0, 00:13:51.840 "data_size": 65536 00:13:51.840 }, 00:13:51.840 { 00:13:51.840 "name": "BaseBdev2", 00:13:51.840 "uuid": "2bcef6ff-653d-4f59-865f-5be42173a9e0", 00:13:51.840 "is_configured": true, 
00:13:51.840 "data_offset": 0, 00:13:51.840 "data_size": 65536 00:13:51.840 }, 00:13:51.840 { 00:13:51.840 "name": "BaseBdev3", 00:13:51.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.840 "is_configured": false, 00:13:51.840 "data_offset": 0, 00:13:51.840 "data_size": 0 00:13:51.840 }, 00:13:51.840 { 00:13:51.840 "name": "BaseBdev4", 00:13:51.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.840 "is_configured": false, 00:13:51.840 "data_offset": 0, 00:13:51.840 "data_size": 0 00:13:51.840 } 00:13:51.840 ] 00:13:51.840 }' 00:13:51.840 08:47:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.840 08:47:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.407 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:52.407 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.407 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.407 [2024-11-20 08:47:23.079498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:52.407 BaseBdev3 00:13:52.407 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.407 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:52.407 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:52.407 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:52.407 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:52.407 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:52.407 08:47:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:52.407 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:52.407 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.407 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.407 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.407 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:52.407 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.407 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.407 [ 00:13:52.407 { 00:13:52.407 "name": "BaseBdev3", 00:13:52.407 "aliases": [ 00:13:52.407 "2f9e5fcd-bbfb-4db4-8d53-e78a156127b7" 00:13:52.408 ], 00:13:52.408 "product_name": "Malloc disk", 00:13:52.408 "block_size": 512, 00:13:52.408 "num_blocks": 65536, 00:13:52.408 "uuid": "2f9e5fcd-bbfb-4db4-8d53-e78a156127b7", 00:13:52.408 "assigned_rate_limits": { 00:13:52.408 "rw_ios_per_sec": 0, 00:13:52.408 "rw_mbytes_per_sec": 0, 00:13:52.408 "r_mbytes_per_sec": 0, 00:13:52.408 "w_mbytes_per_sec": 0 00:13:52.408 }, 00:13:52.408 "claimed": true, 00:13:52.408 "claim_type": "exclusive_write", 00:13:52.408 "zoned": false, 00:13:52.408 "supported_io_types": { 00:13:52.408 "read": true, 00:13:52.408 "write": true, 00:13:52.408 "unmap": true, 00:13:52.408 "flush": true, 00:13:52.408 "reset": true, 00:13:52.408 "nvme_admin": false, 00:13:52.408 "nvme_io": false, 00:13:52.408 "nvme_io_md": false, 00:13:52.408 "write_zeroes": true, 00:13:52.408 "zcopy": true, 00:13:52.408 "get_zone_info": false, 00:13:52.408 "zone_management": false, 00:13:52.408 "zone_append": false, 00:13:52.408 "compare": false, 00:13:52.408 "compare_and_write": false, 
00:13:52.408 "abort": true, 00:13:52.408 "seek_hole": false, 00:13:52.408 "seek_data": false, 00:13:52.408 "copy": true, 00:13:52.408 "nvme_iov_md": false 00:13:52.408 }, 00:13:52.408 "memory_domains": [ 00:13:52.408 { 00:13:52.408 "dma_device_id": "system", 00:13:52.408 "dma_device_type": 1 00:13:52.408 }, 00:13:52.408 { 00:13:52.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.408 "dma_device_type": 2 00:13:52.408 } 00:13:52.408 ], 00:13:52.408 "driver_specific": {} 00:13:52.408 } 00:13:52.408 ] 00:13:52.408 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.408 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:52.408 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:52.408 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:52.408 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:52.408 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.408 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.408 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.408 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.408 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.408 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.408 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.408 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:52.408 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.408 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.408 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.408 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.408 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.408 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.408 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.408 "name": "Existed_Raid", 00:13:52.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.408 "strip_size_kb": 0, 00:13:52.408 "state": "configuring", 00:13:52.408 "raid_level": "raid1", 00:13:52.408 "superblock": false, 00:13:52.408 "num_base_bdevs": 4, 00:13:52.408 "num_base_bdevs_discovered": 3, 00:13:52.408 "num_base_bdevs_operational": 4, 00:13:52.408 "base_bdevs_list": [ 00:13:52.408 { 00:13:52.408 "name": "BaseBdev1", 00:13:52.408 "uuid": "68807dc5-07a4-4044-8aeb-fcac1068f5cb", 00:13:52.408 "is_configured": true, 00:13:52.408 "data_offset": 0, 00:13:52.408 "data_size": 65536 00:13:52.408 }, 00:13:52.408 { 00:13:52.408 "name": "BaseBdev2", 00:13:52.408 "uuid": "2bcef6ff-653d-4f59-865f-5be42173a9e0", 00:13:52.408 "is_configured": true, 00:13:52.408 "data_offset": 0, 00:13:52.408 "data_size": 65536 00:13:52.408 }, 00:13:52.408 { 00:13:52.408 "name": "BaseBdev3", 00:13:52.408 "uuid": "2f9e5fcd-bbfb-4db4-8d53-e78a156127b7", 00:13:52.408 "is_configured": true, 00:13:52.408 "data_offset": 0, 00:13:52.408 "data_size": 65536 00:13:52.408 }, 00:13:52.408 { 00:13:52.408 "name": "BaseBdev4", 00:13:52.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.408 "is_configured": false, 
00:13:52.408 "data_offset": 0, 00:13:52.408 "data_size": 0 00:13:52.408 } 00:13:52.408 ] 00:13:52.408 }' 00:13:52.408 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.408 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.977 [2024-11-20 08:47:23.657943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:52.977 [2024-11-20 08:47:23.658011] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:52.977 [2024-11-20 08:47:23.658025] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:52.977 [2024-11-20 08:47:23.658413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:52.977 [2024-11-20 08:47:23.658644] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:52.977 [2024-11-20 08:47:23.658664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:52.977 [2024-11-20 08:47:23.659016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.977 BaseBdev4 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.977 [ 00:13:52.977 { 00:13:52.977 "name": "BaseBdev4", 00:13:52.977 "aliases": [ 00:13:52.977 "905dc92c-3f7a-45c5-8a9e-556bbfb384c2" 00:13:52.977 ], 00:13:52.977 "product_name": "Malloc disk", 00:13:52.977 "block_size": 512, 00:13:52.977 "num_blocks": 65536, 00:13:52.977 "uuid": "905dc92c-3f7a-45c5-8a9e-556bbfb384c2", 00:13:52.977 "assigned_rate_limits": { 00:13:52.977 "rw_ios_per_sec": 0, 00:13:52.977 "rw_mbytes_per_sec": 0, 00:13:52.977 "r_mbytes_per_sec": 0, 00:13:52.977 "w_mbytes_per_sec": 0 00:13:52.977 }, 00:13:52.977 "claimed": true, 00:13:52.977 "claim_type": "exclusive_write", 00:13:52.977 "zoned": false, 00:13:52.977 "supported_io_types": { 00:13:52.977 "read": true, 00:13:52.977 "write": true, 00:13:52.977 "unmap": true, 00:13:52.977 "flush": true, 00:13:52.977 "reset": true, 00:13:52.977 
"nvme_admin": false, 00:13:52.977 "nvme_io": false, 00:13:52.977 "nvme_io_md": false, 00:13:52.977 "write_zeroes": true, 00:13:52.977 "zcopy": true, 00:13:52.977 "get_zone_info": false, 00:13:52.977 "zone_management": false, 00:13:52.977 "zone_append": false, 00:13:52.977 "compare": false, 00:13:52.977 "compare_and_write": false, 00:13:52.977 "abort": true, 00:13:52.977 "seek_hole": false, 00:13:52.977 "seek_data": false, 00:13:52.977 "copy": true, 00:13:52.977 "nvme_iov_md": false 00:13:52.977 }, 00:13:52.977 "memory_domains": [ 00:13:52.977 { 00:13:52.977 "dma_device_id": "system", 00:13:52.977 "dma_device_type": 1 00:13:52.977 }, 00:13:52.977 { 00:13:52.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.977 "dma_device_type": 2 00:13:52.977 } 00:13:52.977 ], 00:13:52.977 "driver_specific": {} 00:13:52.977 } 00:13:52.977 ] 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.977 08:47:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.977 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.977 "name": "Existed_Raid", 00:13:52.977 "uuid": "c4817b6c-b412-4707-8ce6-30c29baf6b42", 00:13:52.977 "strip_size_kb": 0, 00:13:52.977 "state": "online", 00:13:52.977 "raid_level": "raid1", 00:13:52.977 "superblock": false, 00:13:52.977 "num_base_bdevs": 4, 00:13:52.977 "num_base_bdevs_discovered": 4, 00:13:52.977 "num_base_bdevs_operational": 4, 00:13:52.977 "base_bdevs_list": [ 00:13:52.977 { 00:13:52.977 "name": "BaseBdev1", 00:13:52.977 "uuid": "68807dc5-07a4-4044-8aeb-fcac1068f5cb", 00:13:52.977 "is_configured": true, 00:13:52.977 "data_offset": 0, 00:13:52.977 "data_size": 65536 00:13:52.977 }, 00:13:52.977 { 00:13:52.977 "name": "BaseBdev2", 00:13:52.978 "uuid": "2bcef6ff-653d-4f59-865f-5be42173a9e0", 00:13:52.978 "is_configured": true, 00:13:52.978 "data_offset": 0, 00:13:52.978 "data_size": 65536 00:13:52.978 }, 00:13:52.978 { 00:13:52.978 "name": "BaseBdev3", 00:13:52.978 "uuid": 
"2f9e5fcd-bbfb-4db4-8d53-e78a156127b7", 00:13:52.978 "is_configured": true, 00:13:52.978 "data_offset": 0, 00:13:52.978 "data_size": 65536 00:13:52.978 }, 00:13:52.978 { 00:13:52.978 "name": "BaseBdev4", 00:13:52.978 "uuid": "905dc92c-3f7a-45c5-8a9e-556bbfb384c2", 00:13:52.978 "is_configured": true, 00:13:52.978 "data_offset": 0, 00:13:52.978 "data_size": 65536 00:13:52.978 } 00:13:52.978 ] 00:13:52.978 }' 00:13:52.978 08:47:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.978 08:47:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.546 [2024-11-20 08:47:24.210648] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.546 08:47:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:53.546 "name": "Existed_Raid", 00:13:53.546 "aliases": [ 00:13:53.546 "c4817b6c-b412-4707-8ce6-30c29baf6b42" 00:13:53.546 ], 00:13:53.546 "product_name": "Raid Volume", 00:13:53.546 "block_size": 512, 00:13:53.546 "num_blocks": 65536, 00:13:53.546 "uuid": "c4817b6c-b412-4707-8ce6-30c29baf6b42", 00:13:53.546 "assigned_rate_limits": { 00:13:53.546 "rw_ios_per_sec": 0, 00:13:53.546 "rw_mbytes_per_sec": 0, 00:13:53.546 "r_mbytes_per_sec": 0, 00:13:53.546 "w_mbytes_per_sec": 0 00:13:53.546 }, 00:13:53.546 "claimed": false, 00:13:53.546 "zoned": false, 00:13:53.546 "supported_io_types": { 00:13:53.546 "read": true, 00:13:53.546 "write": true, 00:13:53.546 "unmap": false, 00:13:53.546 "flush": false, 00:13:53.546 "reset": true, 00:13:53.546 "nvme_admin": false, 00:13:53.546 "nvme_io": false, 00:13:53.546 "nvme_io_md": false, 00:13:53.546 "write_zeroes": true, 00:13:53.546 "zcopy": false, 00:13:53.546 "get_zone_info": false, 00:13:53.546 "zone_management": false, 00:13:53.546 "zone_append": false, 00:13:53.546 "compare": false, 00:13:53.546 "compare_and_write": false, 00:13:53.546 "abort": false, 00:13:53.546 "seek_hole": false, 00:13:53.546 "seek_data": false, 00:13:53.546 "copy": false, 00:13:53.546 "nvme_iov_md": false 00:13:53.546 }, 00:13:53.546 "memory_domains": [ 00:13:53.546 { 00:13:53.546 "dma_device_id": "system", 00:13:53.546 "dma_device_type": 1 00:13:53.546 }, 00:13:53.546 { 00:13:53.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.546 "dma_device_type": 2 00:13:53.546 }, 00:13:53.546 { 00:13:53.546 "dma_device_id": "system", 00:13:53.546 "dma_device_type": 1 00:13:53.546 }, 00:13:53.546 { 00:13:53.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.546 "dma_device_type": 2 00:13:53.546 }, 00:13:53.546 { 00:13:53.546 "dma_device_id": "system", 00:13:53.546 "dma_device_type": 1 00:13:53.546 }, 00:13:53.546 { 00:13:53.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:53.546 "dma_device_type": 2 00:13:53.546 }, 00:13:53.546 { 00:13:53.546 "dma_device_id": "system", 00:13:53.546 "dma_device_type": 1 00:13:53.546 }, 00:13:53.546 { 00:13:53.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.546 "dma_device_type": 2 00:13:53.546 } 00:13:53.546 ], 00:13:53.546 "driver_specific": { 00:13:53.546 "raid": { 00:13:53.546 "uuid": "c4817b6c-b412-4707-8ce6-30c29baf6b42", 00:13:53.546 "strip_size_kb": 0, 00:13:53.546 "state": "online", 00:13:53.546 "raid_level": "raid1", 00:13:53.546 "superblock": false, 00:13:53.546 "num_base_bdevs": 4, 00:13:53.546 "num_base_bdevs_discovered": 4, 00:13:53.546 "num_base_bdevs_operational": 4, 00:13:53.546 "base_bdevs_list": [ 00:13:53.546 { 00:13:53.546 "name": "BaseBdev1", 00:13:53.546 "uuid": "68807dc5-07a4-4044-8aeb-fcac1068f5cb", 00:13:53.546 "is_configured": true, 00:13:53.546 "data_offset": 0, 00:13:53.546 "data_size": 65536 00:13:53.546 }, 00:13:53.546 { 00:13:53.546 "name": "BaseBdev2", 00:13:53.546 "uuid": "2bcef6ff-653d-4f59-865f-5be42173a9e0", 00:13:53.546 "is_configured": true, 00:13:53.546 "data_offset": 0, 00:13:53.546 "data_size": 65536 00:13:53.546 }, 00:13:53.546 { 00:13:53.546 "name": "BaseBdev3", 00:13:53.546 "uuid": "2f9e5fcd-bbfb-4db4-8d53-e78a156127b7", 00:13:53.546 "is_configured": true, 00:13:53.546 "data_offset": 0, 00:13:53.546 "data_size": 65536 00:13:53.546 }, 00:13:53.546 { 00:13:53.546 "name": "BaseBdev4", 00:13:53.546 "uuid": "905dc92c-3f7a-45c5-8a9e-556bbfb384c2", 00:13:53.546 "is_configured": true, 00:13:53.546 "data_offset": 0, 00:13:53.546 "data_size": 65536 00:13:53.546 } 00:13:53.546 ] 00:13:53.546 } 00:13:53.546 } 00:13:53.546 }' 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:53.546 BaseBdev2 00:13:53.546 BaseBdev3 
00:13:53.546 BaseBdev4' 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.546 08:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.806 08:47:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.806 08:47:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.806 [2024-11-20 08:47:24.586333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.806 
08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.806 08:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.065 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.065 "name": "Existed_Raid", 00:13:54.065 "uuid": "c4817b6c-b412-4707-8ce6-30c29baf6b42", 00:13:54.065 "strip_size_kb": 0, 00:13:54.065 "state": "online", 00:13:54.065 "raid_level": "raid1", 00:13:54.065 "superblock": false, 00:13:54.065 "num_base_bdevs": 4, 00:13:54.065 "num_base_bdevs_discovered": 3, 00:13:54.065 "num_base_bdevs_operational": 3, 00:13:54.065 "base_bdevs_list": [ 00:13:54.065 { 00:13:54.065 "name": null, 00:13:54.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.065 "is_configured": false, 00:13:54.065 "data_offset": 0, 00:13:54.065 "data_size": 65536 00:13:54.065 }, 00:13:54.065 { 00:13:54.065 "name": "BaseBdev2", 00:13:54.065 "uuid": "2bcef6ff-653d-4f59-865f-5be42173a9e0", 00:13:54.065 "is_configured": true, 00:13:54.065 "data_offset": 0, 00:13:54.065 "data_size": 65536 00:13:54.065 }, 00:13:54.065 { 00:13:54.065 "name": "BaseBdev3", 00:13:54.065 "uuid": "2f9e5fcd-bbfb-4db4-8d53-e78a156127b7", 00:13:54.065 "is_configured": true, 00:13:54.065 "data_offset": 0, 
00:13:54.065 "data_size": 65536 00:13:54.065 }, 00:13:54.065 { 00:13:54.065 "name": "BaseBdev4", 00:13:54.065 "uuid": "905dc92c-3f7a-45c5-8a9e-556bbfb384c2", 00:13:54.065 "is_configured": true, 00:13:54.065 "data_offset": 0, 00:13:54.065 "data_size": 65536 00:13:54.065 } 00:13:54.065 ] 00:13:54.065 }' 00:13:54.065 08:47:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.065 08:47:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.324 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:54.324 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:54.324 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.324 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.324 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:54.324 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.324 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.583 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:54.583 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:54.583 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:54.583 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.583 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.583 [2024-11-20 08:47:25.244760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:54.584 08:47:25 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.584 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:54.584 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:54.584 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.584 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.584 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.584 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:54.584 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.584 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:54.584 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:54.584 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:54.584 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.584 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.584 [2024-11-20 08:47:25.387352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:54.584 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.584 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:54.584 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:54.584 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.584 08:47:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:54.584 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.584 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.584 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.843 [2024-11-20 08:47:25.536336] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:54.843 [2024-11-20 08:47:25.536456] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:54.843 [2024-11-20 08:47:25.623482] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:54.843 [2024-11-20 08:47:25.623549] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:54.843 [2024-11-20 08:47:25.623569] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.843 BaseBdev2 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.843 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.843 [ 00:13:54.843 { 00:13:54.843 "name": "BaseBdev2", 00:13:54.843 "aliases": [ 00:13:54.843 "9d081984-7611-49ad-8b6c-4a259e0ab9ed" 00:13:54.843 ], 00:13:54.843 "product_name": "Malloc disk", 00:13:54.843 "block_size": 512, 00:13:54.843 "num_blocks": 65536, 00:13:54.843 "uuid": "9d081984-7611-49ad-8b6c-4a259e0ab9ed", 00:13:54.843 "assigned_rate_limits": { 00:13:54.843 "rw_ios_per_sec": 0, 00:13:54.843 "rw_mbytes_per_sec": 0, 00:13:54.843 "r_mbytes_per_sec": 0, 00:13:54.843 "w_mbytes_per_sec": 0 00:13:54.843 }, 00:13:54.843 "claimed": false, 00:13:54.843 "zoned": false, 00:13:54.843 "supported_io_types": { 00:13:54.843 "read": true, 00:13:54.843 "write": true, 00:13:54.843 "unmap": true, 00:13:54.843 "flush": true, 00:13:54.843 "reset": true, 00:13:54.843 "nvme_admin": false, 00:13:54.843 "nvme_io": false, 00:13:54.843 "nvme_io_md": false, 00:13:54.843 "write_zeroes": true, 00:13:54.843 "zcopy": true, 00:13:54.843 "get_zone_info": false, 00:13:54.843 "zone_management": false, 00:13:54.843 "zone_append": false, 
00:13:54.843 "compare": false, 00:13:54.844 "compare_and_write": false, 00:13:54.844 "abort": true, 00:13:54.844 "seek_hole": false, 00:13:54.844 "seek_data": false, 00:13:54.844 "copy": true, 00:13:54.844 "nvme_iov_md": false 00:13:54.844 }, 00:13:54.844 "memory_domains": [ 00:13:54.844 { 00:13:54.844 "dma_device_id": "system", 00:13:54.844 "dma_device_type": 1 00:13:54.844 }, 00:13:54.844 { 00:13:54.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.844 "dma_device_type": 2 00:13:54.844 } 00:13:54.844 ], 00:13:54.844 "driver_specific": {} 00:13:54.844 } 00:13:54.844 ] 00:13:54.844 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.844 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:54.844 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:54.844 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:54.844 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:54.844 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.844 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.103 BaseBdev3 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.103 [ 00:13:55.103 { 00:13:55.103 "name": "BaseBdev3", 00:13:55.103 "aliases": [ 00:13:55.103 "46ecde56-5c11-4980-a8fc-482731718cdc" 00:13:55.103 ], 00:13:55.103 "product_name": "Malloc disk", 00:13:55.103 "block_size": 512, 00:13:55.103 "num_blocks": 65536, 00:13:55.103 "uuid": "46ecde56-5c11-4980-a8fc-482731718cdc", 00:13:55.103 "assigned_rate_limits": { 00:13:55.103 "rw_ios_per_sec": 0, 00:13:55.103 "rw_mbytes_per_sec": 0, 00:13:55.103 "r_mbytes_per_sec": 0, 00:13:55.103 "w_mbytes_per_sec": 0 00:13:55.103 }, 00:13:55.103 "claimed": false, 00:13:55.103 "zoned": false, 00:13:55.103 "supported_io_types": { 00:13:55.103 "read": true, 00:13:55.103 "write": true, 00:13:55.103 "unmap": true, 00:13:55.103 "flush": true, 00:13:55.103 "reset": true, 00:13:55.103 "nvme_admin": false, 00:13:55.103 "nvme_io": false, 00:13:55.103 "nvme_io_md": false, 00:13:55.103 "write_zeroes": true, 00:13:55.103 "zcopy": true, 00:13:55.103 "get_zone_info": false, 00:13:55.103 "zone_management": false, 00:13:55.103 "zone_append": false, 
00:13:55.103 "compare": false, 00:13:55.103 "compare_and_write": false, 00:13:55.103 "abort": true, 00:13:55.103 "seek_hole": false, 00:13:55.103 "seek_data": false, 00:13:55.103 "copy": true, 00:13:55.103 "nvme_iov_md": false 00:13:55.103 }, 00:13:55.103 "memory_domains": [ 00:13:55.103 { 00:13:55.103 "dma_device_id": "system", 00:13:55.103 "dma_device_type": 1 00:13:55.103 }, 00:13:55.103 { 00:13:55.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.103 "dma_device_type": 2 00:13:55.103 } 00:13:55.103 ], 00:13:55.103 "driver_specific": {} 00:13:55.103 } 00:13:55.103 ] 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.103 BaseBdev4 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.103 [ 00:13:55.103 { 00:13:55.103 "name": "BaseBdev4", 00:13:55.103 "aliases": [ 00:13:55.103 "315335e3-7bf6-453b-910a-c055f0dff453" 00:13:55.103 ], 00:13:55.103 "product_name": "Malloc disk", 00:13:55.103 "block_size": 512, 00:13:55.103 "num_blocks": 65536, 00:13:55.103 "uuid": "315335e3-7bf6-453b-910a-c055f0dff453", 00:13:55.103 "assigned_rate_limits": { 00:13:55.103 "rw_ios_per_sec": 0, 00:13:55.103 "rw_mbytes_per_sec": 0, 00:13:55.103 "r_mbytes_per_sec": 0, 00:13:55.103 "w_mbytes_per_sec": 0 00:13:55.103 }, 00:13:55.103 "claimed": false, 00:13:55.103 "zoned": false, 00:13:55.103 "supported_io_types": { 00:13:55.103 "read": true, 00:13:55.103 "write": true, 00:13:55.103 "unmap": true, 00:13:55.103 "flush": true, 00:13:55.103 "reset": true, 00:13:55.103 "nvme_admin": false, 00:13:55.103 "nvme_io": false, 00:13:55.103 "nvme_io_md": false, 00:13:55.103 "write_zeroes": true, 00:13:55.103 "zcopy": true, 00:13:55.103 "get_zone_info": false, 00:13:55.103 "zone_management": false, 00:13:55.103 "zone_append": false, 
00:13:55.103 "compare": false, 00:13:55.103 "compare_and_write": false, 00:13:55.103 "abort": true, 00:13:55.103 "seek_hole": false, 00:13:55.103 "seek_data": false, 00:13:55.103 "copy": true, 00:13:55.103 "nvme_iov_md": false 00:13:55.103 }, 00:13:55.103 "memory_domains": [ 00:13:55.103 { 00:13:55.103 "dma_device_id": "system", 00:13:55.103 "dma_device_type": 1 00:13:55.103 }, 00:13:55.103 { 00:13:55.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.103 "dma_device_type": 2 00:13:55.103 } 00:13:55.103 ], 00:13:55.103 "driver_specific": {} 00:13:55.103 } 00:13:55.103 ] 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:55.103 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:55.104 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:55.104 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.104 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.104 [2024-11-20 08:47:25.914087] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:55.104 [2024-11-20 08:47:25.914164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:55.104 [2024-11-20 08:47:25.914207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:55.104 [2024-11-20 08:47:25.916600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:55.104 [2024-11-20 08:47:25.916844] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:55.104 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.104 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:55.104 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.104 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.104 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.104 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.104 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.104 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.104 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.104 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.104 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.104 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.104 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.104 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.104 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.104 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.104 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:13:55.104 "name": "Existed_Raid", 00:13:55.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.104 "strip_size_kb": 0, 00:13:55.104 "state": "configuring", 00:13:55.104 "raid_level": "raid1", 00:13:55.104 "superblock": false, 00:13:55.104 "num_base_bdevs": 4, 00:13:55.104 "num_base_bdevs_discovered": 3, 00:13:55.104 "num_base_bdevs_operational": 4, 00:13:55.104 "base_bdevs_list": [ 00:13:55.104 { 00:13:55.104 "name": "BaseBdev1", 00:13:55.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.104 "is_configured": false, 00:13:55.104 "data_offset": 0, 00:13:55.104 "data_size": 0 00:13:55.104 }, 00:13:55.104 { 00:13:55.104 "name": "BaseBdev2", 00:13:55.104 "uuid": "9d081984-7611-49ad-8b6c-4a259e0ab9ed", 00:13:55.104 "is_configured": true, 00:13:55.104 "data_offset": 0, 00:13:55.104 "data_size": 65536 00:13:55.104 }, 00:13:55.104 { 00:13:55.104 "name": "BaseBdev3", 00:13:55.104 "uuid": "46ecde56-5c11-4980-a8fc-482731718cdc", 00:13:55.104 "is_configured": true, 00:13:55.104 "data_offset": 0, 00:13:55.104 "data_size": 65536 00:13:55.104 }, 00:13:55.104 { 00:13:55.104 "name": "BaseBdev4", 00:13:55.104 "uuid": "315335e3-7bf6-453b-910a-c055f0dff453", 00:13:55.104 "is_configured": true, 00:13:55.104 "data_offset": 0, 00:13:55.104 "data_size": 65536 00:13:55.104 } 00:13:55.104 ] 00:13:55.104 }' 00:13:55.104 08:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.104 08:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.671 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:55.671 08:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.671 08:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.671 [2024-11-20 08:47:26.438299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:13:55.671 08:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.671 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:55.671 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.671 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.671 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.671 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.671 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.671 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.671 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.671 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.671 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.671 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.671 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.671 08:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.671 08:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.671 08:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.671 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.671 "name": "Existed_Raid", 00:13:55.671 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:55.671 "strip_size_kb": 0, 00:13:55.671 "state": "configuring", 00:13:55.671 "raid_level": "raid1", 00:13:55.671 "superblock": false, 00:13:55.671 "num_base_bdevs": 4, 00:13:55.671 "num_base_bdevs_discovered": 2, 00:13:55.671 "num_base_bdevs_operational": 4, 00:13:55.671 "base_bdevs_list": [ 00:13:55.671 { 00:13:55.672 "name": "BaseBdev1", 00:13:55.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.672 "is_configured": false, 00:13:55.672 "data_offset": 0, 00:13:55.672 "data_size": 0 00:13:55.672 }, 00:13:55.672 { 00:13:55.672 "name": null, 00:13:55.672 "uuid": "9d081984-7611-49ad-8b6c-4a259e0ab9ed", 00:13:55.672 "is_configured": false, 00:13:55.672 "data_offset": 0, 00:13:55.672 "data_size": 65536 00:13:55.672 }, 00:13:55.672 { 00:13:55.672 "name": "BaseBdev3", 00:13:55.672 "uuid": "46ecde56-5c11-4980-a8fc-482731718cdc", 00:13:55.672 "is_configured": true, 00:13:55.672 "data_offset": 0, 00:13:55.672 "data_size": 65536 00:13:55.672 }, 00:13:55.672 { 00:13:55.672 "name": "BaseBdev4", 00:13:55.672 "uuid": "315335e3-7bf6-453b-910a-c055f0dff453", 00:13:55.672 "is_configured": true, 00:13:55.672 "data_offset": 0, 00:13:55.672 "data_size": 65536 00:13:55.672 } 00:13:55.672 ] 00:13:55.672 }' 00:13:55.672 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.672 08:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.240 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:56.240 08:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.240 08:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.240 08:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.240 08:47:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.240 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:56.240 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:56.240 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.240 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.240 [2024-11-20 08:47:27.040939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.240 BaseBdev1 00:13:56.240 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.240 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:56.240 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:56.240 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:56.240 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:56.240 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:56.240 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:56.240 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:56.240 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.240 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.240 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.240 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:13:56.240 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.240 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.240 [ 00:13:56.240 { 00:13:56.240 "name": "BaseBdev1", 00:13:56.240 "aliases": [ 00:13:56.240 "d6272edb-580c-4f46-9b3a-2321a0e2ad01" 00:13:56.240 ], 00:13:56.240 "product_name": "Malloc disk", 00:13:56.240 "block_size": 512, 00:13:56.240 "num_blocks": 65536, 00:13:56.240 "uuid": "d6272edb-580c-4f46-9b3a-2321a0e2ad01", 00:13:56.240 "assigned_rate_limits": { 00:13:56.240 "rw_ios_per_sec": 0, 00:13:56.240 "rw_mbytes_per_sec": 0, 00:13:56.240 "r_mbytes_per_sec": 0, 00:13:56.240 "w_mbytes_per_sec": 0 00:13:56.240 }, 00:13:56.240 "claimed": true, 00:13:56.240 "claim_type": "exclusive_write", 00:13:56.241 "zoned": false, 00:13:56.241 "supported_io_types": { 00:13:56.241 "read": true, 00:13:56.241 "write": true, 00:13:56.241 "unmap": true, 00:13:56.241 "flush": true, 00:13:56.241 "reset": true, 00:13:56.241 "nvme_admin": false, 00:13:56.241 "nvme_io": false, 00:13:56.241 "nvme_io_md": false, 00:13:56.241 "write_zeroes": true, 00:13:56.241 "zcopy": true, 00:13:56.241 "get_zone_info": false, 00:13:56.241 "zone_management": false, 00:13:56.241 "zone_append": false, 00:13:56.241 "compare": false, 00:13:56.241 "compare_and_write": false, 00:13:56.241 "abort": true, 00:13:56.241 "seek_hole": false, 00:13:56.241 "seek_data": false, 00:13:56.241 "copy": true, 00:13:56.241 "nvme_iov_md": false 00:13:56.241 }, 00:13:56.241 "memory_domains": [ 00:13:56.241 { 00:13:56.241 "dma_device_id": "system", 00:13:56.241 "dma_device_type": 1 00:13:56.241 }, 00:13:56.241 { 00:13:56.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.241 "dma_device_type": 2 00:13:56.241 } 00:13:56.241 ], 00:13:56.241 "driver_specific": {} 00:13:56.241 } 00:13:56.241 ] 00:13:56.241 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:56.241 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:56.241 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:56.241 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.241 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.241 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.241 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.241 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:56.241 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.241 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.241 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.241 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.241 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.241 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.241 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.241 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.241 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.241 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.241 "name": "Existed_Raid", 00:13:56.241 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:56.241 "strip_size_kb": 0, 00:13:56.241 "state": "configuring", 00:13:56.241 "raid_level": "raid1", 00:13:56.241 "superblock": false, 00:13:56.241 "num_base_bdevs": 4, 00:13:56.241 "num_base_bdevs_discovered": 3, 00:13:56.241 "num_base_bdevs_operational": 4, 00:13:56.241 "base_bdevs_list": [ 00:13:56.241 { 00:13:56.241 "name": "BaseBdev1", 00:13:56.241 "uuid": "d6272edb-580c-4f46-9b3a-2321a0e2ad01", 00:13:56.241 "is_configured": true, 00:13:56.241 "data_offset": 0, 00:13:56.241 "data_size": 65536 00:13:56.241 }, 00:13:56.241 { 00:13:56.241 "name": null, 00:13:56.241 "uuid": "9d081984-7611-49ad-8b6c-4a259e0ab9ed", 00:13:56.241 "is_configured": false, 00:13:56.241 "data_offset": 0, 00:13:56.241 "data_size": 65536 00:13:56.241 }, 00:13:56.241 { 00:13:56.241 "name": "BaseBdev3", 00:13:56.241 "uuid": "46ecde56-5c11-4980-a8fc-482731718cdc", 00:13:56.241 "is_configured": true, 00:13:56.241 "data_offset": 0, 00:13:56.241 "data_size": 65536 00:13:56.241 }, 00:13:56.241 { 00:13:56.241 "name": "BaseBdev4", 00:13:56.241 "uuid": "315335e3-7bf6-453b-910a-c055f0dff453", 00:13:56.241 "is_configured": true, 00:13:56.241 "data_offset": 0, 00:13:56.241 "data_size": 65536 00:13:56.241 } 00:13:56.241 ] 00:13:56.241 }' 00:13:56.241 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.241 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.809 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.809 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:56.809 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.809 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.809 08:47:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.809 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:56.809 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:56.809 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.809 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.809 [2024-11-20 08:47:27.653202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:56.809 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.809 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:56.809 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.809 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.809 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.809 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.809 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:56.810 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.810 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.810 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.810 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.810 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:56.810 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.810 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.810 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.810 08:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.810 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.810 "name": "Existed_Raid", 00:13:56.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.810 "strip_size_kb": 0, 00:13:56.810 "state": "configuring", 00:13:56.810 "raid_level": "raid1", 00:13:56.810 "superblock": false, 00:13:56.810 "num_base_bdevs": 4, 00:13:56.810 "num_base_bdevs_discovered": 2, 00:13:56.810 "num_base_bdevs_operational": 4, 00:13:56.810 "base_bdevs_list": [ 00:13:56.810 { 00:13:56.810 "name": "BaseBdev1", 00:13:56.810 "uuid": "d6272edb-580c-4f46-9b3a-2321a0e2ad01", 00:13:56.810 "is_configured": true, 00:13:56.810 "data_offset": 0, 00:13:56.810 "data_size": 65536 00:13:56.810 }, 00:13:56.810 { 00:13:56.810 "name": null, 00:13:56.810 "uuid": "9d081984-7611-49ad-8b6c-4a259e0ab9ed", 00:13:56.810 "is_configured": false, 00:13:56.810 "data_offset": 0, 00:13:56.810 "data_size": 65536 00:13:56.810 }, 00:13:56.810 { 00:13:56.810 "name": null, 00:13:56.810 "uuid": "46ecde56-5c11-4980-a8fc-482731718cdc", 00:13:56.810 "is_configured": false, 00:13:56.810 "data_offset": 0, 00:13:56.810 "data_size": 65536 00:13:56.810 }, 00:13:56.810 { 00:13:56.810 "name": "BaseBdev4", 00:13:56.810 "uuid": "315335e3-7bf6-453b-910a-c055f0dff453", 00:13:56.810 "is_configured": true, 00:13:56.810 "data_offset": 0, 00:13:56.810 "data_size": 65536 00:13:56.810 } 00:13:56.810 ] 00:13:56.810 }' 00:13:56.810 08:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.810 08:47:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.378 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:57.378 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.378 08:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.378 08:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.378 08:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.378 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:57.378 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:57.378 08:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.378 08:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.378 [2024-11-20 08:47:28.233418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.378 08:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.378 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:57.378 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.378 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.378 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.378 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.378 08:47:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.378 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.378 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.378 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.378 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.378 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.378 08:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.378 08:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.378 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.378 08:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.637 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.637 "name": "Existed_Raid", 00:13:57.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.637 "strip_size_kb": 0, 00:13:57.637 "state": "configuring", 00:13:57.637 "raid_level": "raid1", 00:13:57.637 "superblock": false, 00:13:57.637 "num_base_bdevs": 4, 00:13:57.637 "num_base_bdevs_discovered": 3, 00:13:57.637 "num_base_bdevs_operational": 4, 00:13:57.637 "base_bdevs_list": [ 00:13:57.637 { 00:13:57.637 "name": "BaseBdev1", 00:13:57.637 "uuid": "d6272edb-580c-4f46-9b3a-2321a0e2ad01", 00:13:57.637 "is_configured": true, 00:13:57.637 "data_offset": 0, 00:13:57.637 "data_size": 65536 00:13:57.637 }, 00:13:57.637 { 00:13:57.637 "name": null, 00:13:57.637 "uuid": "9d081984-7611-49ad-8b6c-4a259e0ab9ed", 00:13:57.637 "is_configured": false, 00:13:57.637 "data_offset": 
0, 00:13:57.637 "data_size": 65536 00:13:57.637 }, 00:13:57.637 { 00:13:57.637 "name": "BaseBdev3", 00:13:57.637 "uuid": "46ecde56-5c11-4980-a8fc-482731718cdc", 00:13:57.637 "is_configured": true, 00:13:57.637 "data_offset": 0, 00:13:57.637 "data_size": 65536 00:13:57.637 }, 00:13:57.637 { 00:13:57.637 "name": "BaseBdev4", 00:13:57.637 "uuid": "315335e3-7bf6-453b-910a-c055f0dff453", 00:13:57.637 "is_configured": true, 00:13:57.637 "data_offset": 0, 00:13:57.637 "data_size": 65536 00:13:57.637 } 00:13:57.637 ] 00:13:57.637 }' 00:13:57.637 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.637 08:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.896 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.896 08:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.896 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:57.896 08:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.896 08:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.896 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:57.896 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:57.896 08:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.896 08:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.155 [2024-11-20 08:47:28.813599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:58.155 08:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.155 08:47:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:58.155 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.155 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.155 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.155 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.155 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.155 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.155 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.155 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.155 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.155 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.155 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.155 08:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.155 08:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.155 08:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.155 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.155 "name": "Existed_Raid", 00:13:58.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.155 "strip_size_kb": 0, 00:13:58.155 "state": "configuring", 00:13:58.155 
"raid_level": "raid1", 00:13:58.155 "superblock": false, 00:13:58.155 "num_base_bdevs": 4, 00:13:58.155 "num_base_bdevs_discovered": 2, 00:13:58.155 "num_base_bdevs_operational": 4, 00:13:58.155 "base_bdevs_list": [ 00:13:58.155 { 00:13:58.155 "name": null, 00:13:58.155 "uuid": "d6272edb-580c-4f46-9b3a-2321a0e2ad01", 00:13:58.155 "is_configured": false, 00:13:58.155 "data_offset": 0, 00:13:58.155 "data_size": 65536 00:13:58.155 }, 00:13:58.155 { 00:13:58.155 "name": null, 00:13:58.155 "uuid": "9d081984-7611-49ad-8b6c-4a259e0ab9ed", 00:13:58.155 "is_configured": false, 00:13:58.155 "data_offset": 0, 00:13:58.155 "data_size": 65536 00:13:58.155 }, 00:13:58.155 { 00:13:58.155 "name": "BaseBdev3", 00:13:58.155 "uuid": "46ecde56-5c11-4980-a8fc-482731718cdc", 00:13:58.155 "is_configured": true, 00:13:58.155 "data_offset": 0, 00:13:58.155 "data_size": 65536 00:13:58.155 }, 00:13:58.155 { 00:13:58.155 "name": "BaseBdev4", 00:13:58.155 "uuid": "315335e3-7bf6-453b-910a-c055f0dff453", 00:13:58.155 "is_configured": true, 00:13:58.155 "data_offset": 0, 00:13:58.155 "data_size": 65536 00:13:58.155 } 00:13:58.155 ] 00:13:58.155 }' 00:13:58.155 08:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.155 08:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.722 [2024-11-20 08:47:29.452797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.722 "name": "Existed_Raid", 00:13:58.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.722 "strip_size_kb": 0, 00:13:58.722 "state": "configuring", 00:13:58.722 "raid_level": "raid1", 00:13:58.722 "superblock": false, 00:13:58.722 "num_base_bdevs": 4, 00:13:58.722 "num_base_bdevs_discovered": 3, 00:13:58.722 "num_base_bdevs_operational": 4, 00:13:58.722 "base_bdevs_list": [ 00:13:58.722 { 00:13:58.722 "name": null, 00:13:58.722 "uuid": "d6272edb-580c-4f46-9b3a-2321a0e2ad01", 00:13:58.722 "is_configured": false, 00:13:58.722 "data_offset": 0, 00:13:58.722 "data_size": 65536 00:13:58.722 }, 00:13:58.722 { 00:13:58.722 "name": "BaseBdev2", 00:13:58.722 "uuid": "9d081984-7611-49ad-8b6c-4a259e0ab9ed", 00:13:58.722 "is_configured": true, 00:13:58.722 "data_offset": 0, 00:13:58.722 "data_size": 65536 00:13:58.722 }, 00:13:58.722 { 00:13:58.722 "name": "BaseBdev3", 00:13:58.722 "uuid": "46ecde56-5c11-4980-a8fc-482731718cdc", 00:13:58.722 "is_configured": true, 00:13:58.722 "data_offset": 0, 00:13:58.722 "data_size": 65536 00:13:58.722 }, 00:13:58.722 { 00:13:58.722 "name": "BaseBdev4", 00:13:58.722 "uuid": "315335e3-7bf6-453b-910a-c055f0dff453", 00:13:58.722 "is_configured": true, 00:13:58.722 "data_offset": 0, 00:13:58.722 "data_size": 65536 00:13:58.722 } 00:13:58.722 ] 00:13:58.722 }' 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.722 08:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.290 08:47:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.290 08:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.290 08:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:59.290 08:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.290 08:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d6272edb-580c-4f46-9b3a-2321a0e2ad01 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.290 [2024-11-20 08:47:30.123409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:59.290 [2024-11-20 08:47:30.123460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:59.290 [2024-11-20 08:47:30.123476] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:59.290 
[2024-11-20 08:47:30.123832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:59.290 [2024-11-20 08:47:30.124068] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:59.290 [2024-11-20 08:47:30.124086] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:59.290 [2024-11-20 08:47:30.124404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.290 NewBaseBdev 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.290 [ 00:13:59.290 { 00:13:59.290 "name": "NewBaseBdev", 00:13:59.290 "aliases": [ 00:13:59.290 "d6272edb-580c-4f46-9b3a-2321a0e2ad01" 00:13:59.290 ], 00:13:59.290 "product_name": "Malloc disk", 00:13:59.290 "block_size": 512, 00:13:59.290 "num_blocks": 65536, 00:13:59.290 "uuid": "d6272edb-580c-4f46-9b3a-2321a0e2ad01", 00:13:59.290 "assigned_rate_limits": { 00:13:59.290 "rw_ios_per_sec": 0, 00:13:59.290 "rw_mbytes_per_sec": 0, 00:13:59.290 "r_mbytes_per_sec": 0, 00:13:59.290 "w_mbytes_per_sec": 0 00:13:59.290 }, 00:13:59.290 "claimed": true, 00:13:59.290 "claim_type": "exclusive_write", 00:13:59.290 "zoned": false, 00:13:59.290 "supported_io_types": { 00:13:59.290 "read": true, 00:13:59.290 "write": true, 00:13:59.290 "unmap": true, 00:13:59.290 "flush": true, 00:13:59.290 "reset": true, 00:13:59.290 "nvme_admin": false, 00:13:59.290 "nvme_io": false, 00:13:59.290 "nvme_io_md": false, 00:13:59.290 "write_zeroes": true, 00:13:59.290 "zcopy": true, 00:13:59.290 "get_zone_info": false, 00:13:59.290 "zone_management": false, 00:13:59.290 "zone_append": false, 00:13:59.290 "compare": false, 00:13:59.290 "compare_and_write": false, 00:13:59.290 "abort": true, 00:13:59.290 "seek_hole": false, 00:13:59.290 "seek_data": false, 00:13:59.290 "copy": true, 00:13:59.290 "nvme_iov_md": false 00:13:59.290 }, 00:13:59.290 "memory_domains": [ 00:13:59.290 { 00:13:59.290 "dma_device_id": "system", 00:13:59.290 "dma_device_type": 1 00:13:59.290 }, 00:13:59.290 { 00:13:59.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.290 "dma_device_type": 2 00:13:59.290 } 00:13:59.290 ], 00:13:59.290 "driver_specific": {} 00:13:59.290 } 00:13:59.290 ] 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.290 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.578 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.578 "name": "Existed_Raid", 00:13:59.578 "uuid": "6afdcf99-fcd4-444c-8acc-6e79cf966930", 00:13:59.578 "strip_size_kb": 0, 00:13:59.578 "state": "online", 00:13:59.578 
"raid_level": "raid1", 00:13:59.578 "superblock": false, 00:13:59.578 "num_base_bdevs": 4, 00:13:59.578 "num_base_bdevs_discovered": 4, 00:13:59.578 "num_base_bdevs_operational": 4, 00:13:59.578 "base_bdevs_list": [ 00:13:59.578 { 00:13:59.578 "name": "NewBaseBdev", 00:13:59.578 "uuid": "d6272edb-580c-4f46-9b3a-2321a0e2ad01", 00:13:59.578 "is_configured": true, 00:13:59.578 "data_offset": 0, 00:13:59.578 "data_size": 65536 00:13:59.578 }, 00:13:59.578 { 00:13:59.578 "name": "BaseBdev2", 00:13:59.578 "uuid": "9d081984-7611-49ad-8b6c-4a259e0ab9ed", 00:13:59.578 "is_configured": true, 00:13:59.578 "data_offset": 0, 00:13:59.578 "data_size": 65536 00:13:59.578 }, 00:13:59.578 { 00:13:59.578 "name": "BaseBdev3", 00:13:59.578 "uuid": "46ecde56-5c11-4980-a8fc-482731718cdc", 00:13:59.578 "is_configured": true, 00:13:59.578 "data_offset": 0, 00:13:59.578 "data_size": 65536 00:13:59.578 }, 00:13:59.578 { 00:13:59.578 "name": "BaseBdev4", 00:13:59.578 "uuid": "315335e3-7bf6-453b-910a-c055f0dff453", 00:13:59.578 "is_configured": true, 00:13:59.578 "data_offset": 0, 00:13:59.578 "data_size": 65536 00:13:59.578 } 00:13:59.578 ] 00:13:59.578 }' 00:13:59.578 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.578 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.871 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:59.871 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:59.871 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:59.871 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:59.871 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:59.871 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:13:59.871 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:59.871 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:59.871 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.871 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.871 [2024-11-20 08:47:30.692329] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:59.871 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.871 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:59.871 "name": "Existed_Raid", 00:13:59.871 "aliases": [ 00:13:59.871 "6afdcf99-fcd4-444c-8acc-6e79cf966930" 00:13:59.871 ], 00:13:59.871 "product_name": "Raid Volume", 00:13:59.871 "block_size": 512, 00:13:59.871 "num_blocks": 65536, 00:13:59.871 "uuid": "6afdcf99-fcd4-444c-8acc-6e79cf966930", 00:13:59.871 "assigned_rate_limits": { 00:13:59.871 "rw_ios_per_sec": 0, 00:13:59.871 "rw_mbytes_per_sec": 0, 00:13:59.871 "r_mbytes_per_sec": 0, 00:13:59.871 "w_mbytes_per_sec": 0 00:13:59.871 }, 00:13:59.871 "claimed": false, 00:13:59.871 "zoned": false, 00:13:59.871 "supported_io_types": { 00:13:59.871 "read": true, 00:13:59.871 "write": true, 00:13:59.871 "unmap": false, 00:13:59.871 "flush": false, 00:13:59.871 "reset": true, 00:13:59.871 "nvme_admin": false, 00:13:59.871 "nvme_io": false, 00:13:59.871 "nvme_io_md": false, 00:13:59.871 "write_zeroes": true, 00:13:59.871 "zcopy": false, 00:13:59.871 "get_zone_info": false, 00:13:59.871 "zone_management": false, 00:13:59.871 "zone_append": false, 00:13:59.871 "compare": false, 00:13:59.871 "compare_and_write": false, 00:13:59.871 "abort": false, 00:13:59.871 "seek_hole": false, 00:13:59.871 "seek_data": false, 00:13:59.871 
"copy": false, 00:13:59.871 "nvme_iov_md": false 00:13:59.871 }, 00:13:59.871 "memory_domains": [ 00:13:59.871 { 00:13:59.871 "dma_device_id": "system", 00:13:59.871 "dma_device_type": 1 00:13:59.871 }, 00:13:59.871 { 00:13:59.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.871 "dma_device_type": 2 00:13:59.871 }, 00:13:59.871 { 00:13:59.871 "dma_device_id": "system", 00:13:59.871 "dma_device_type": 1 00:13:59.871 }, 00:13:59.871 { 00:13:59.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.871 "dma_device_type": 2 00:13:59.871 }, 00:13:59.871 { 00:13:59.871 "dma_device_id": "system", 00:13:59.871 "dma_device_type": 1 00:13:59.871 }, 00:13:59.871 { 00:13:59.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.871 "dma_device_type": 2 00:13:59.871 }, 00:13:59.871 { 00:13:59.871 "dma_device_id": "system", 00:13:59.871 "dma_device_type": 1 00:13:59.871 }, 00:13:59.871 { 00:13:59.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.871 "dma_device_type": 2 00:13:59.871 } 00:13:59.871 ], 00:13:59.871 "driver_specific": { 00:13:59.871 "raid": { 00:13:59.871 "uuid": "6afdcf99-fcd4-444c-8acc-6e79cf966930", 00:13:59.871 "strip_size_kb": 0, 00:13:59.871 "state": "online", 00:13:59.871 "raid_level": "raid1", 00:13:59.871 "superblock": false, 00:13:59.871 "num_base_bdevs": 4, 00:13:59.871 "num_base_bdevs_discovered": 4, 00:13:59.871 "num_base_bdevs_operational": 4, 00:13:59.871 "base_bdevs_list": [ 00:13:59.871 { 00:13:59.871 "name": "NewBaseBdev", 00:13:59.871 "uuid": "d6272edb-580c-4f46-9b3a-2321a0e2ad01", 00:13:59.871 "is_configured": true, 00:13:59.871 "data_offset": 0, 00:13:59.871 "data_size": 65536 00:13:59.871 }, 00:13:59.871 { 00:13:59.871 "name": "BaseBdev2", 00:13:59.871 "uuid": "9d081984-7611-49ad-8b6c-4a259e0ab9ed", 00:13:59.871 "is_configured": true, 00:13:59.871 "data_offset": 0, 00:13:59.871 "data_size": 65536 00:13:59.871 }, 00:13:59.871 { 00:13:59.871 "name": "BaseBdev3", 00:13:59.871 "uuid": "46ecde56-5c11-4980-a8fc-482731718cdc", 00:13:59.871 
"is_configured": true, 00:13:59.871 "data_offset": 0, 00:13:59.871 "data_size": 65536 00:13:59.871 }, 00:13:59.871 { 00:13:59.871 "name": "BaseBdev4", 00:13:59.871 "uuid": "315335e3-7bf6-453b-910a-c055f0dff453", 00:13:59.871 "is_configured": true, 00:13:59.871 "data_offset": 0, 00:13:59.871 "data_size": 65536 00:13:59.871 } 00:13:59.871 ] 00:13:59.871 } 00:13:59.871 } 00:13:59.871 }' 00:13:59.871 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:00.141 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:00.141 BaseBdev2 00:14:00.141 BaseBdev3 00:14:00.141 BaseBdev4' 00:14:00.141 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.141 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:00.141 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.141 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:00.141 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.141 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.141 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.141 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.141 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.141 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.141 08:47:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.141 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:00.141 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.141 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.141 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.141 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.141 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.141 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.141 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.141 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:00.141 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.141 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.141 08:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.141 08:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.141 08:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.141 08:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.141 08:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.141 08:47:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.141 08:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:00.141 08:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.141 08:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.141 08:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.400 08:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.400 08:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.400 08:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:00.400 08:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.400 08:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.400 [2024-11-20 08:47:31.067955] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:00.400 [2024-11-20 08:47:31.067990] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:00.400 [2024-11-20 08:47:31.068083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.400 [2024-11-20 08:47:31.068495] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:00.400 [2024-11-20 08:47:31.068523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:00.400 08:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.400 08:47:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73313 00:14:00.400 08:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73313 ']' 00:14:00.400 08:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73313 00:14:00.400 08:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:00.400 08:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:00.400 08:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73313 00:14:00.400 killing process with pid 73313 00:14:00.401 08:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:00.401 08:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:00.401 08:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73313' 00:14:00.401 08:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73313 00:14:00.401 [2024-11-20 08:47:31.102023] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:00.401 08:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73313 00:14:00.659 [2024-11-20 08:47:31.450373] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:01.595 08:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:01.595 00:14:01.595 real 0m12.708s 00:14:01.595 user 0m21.084s 00:14:01.595 sys 0m1.795s 00:14:01.595 08:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.595 ************************************ 00:14:01.595 END TEST raid_state_function_test 00:14:01.595 ************************************ 00:14:01.595 08:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:01.854 08:47:32 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:14:01.854 08:47:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:01.854 08:47:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.854 08:47:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:01.854 ************************************ 00:14:01.854 START TEST raid_state_function_test_sb 00:14:01.854 ************************************ 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:01.854 
08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74004 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 
'Process raid pid: 74004' 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:01.854 Process raid pid: 74004 00:14:01.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74004 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74004 ']' 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:01.854 08:47:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.854 [2024-11-20 08:47:32.643575] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:14:01.854 [2024-11-20 08:47:32.644709] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.113 [2024-11-20 08:47:32.830403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.113 [2024-11-20 08:47:32.962990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.372 [2024-11-20 08:47:33.170141] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.372 [2024-11-20 08:47:33.170190] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.939 08:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.939 08:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:02.939 08:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:02.939 08:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.939 08:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.939 [2024-11-20 08:47:33.715870] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:02.939 [2024-11-20 08:47:33.715960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:02.939 [2024-11-20 08:47:33.715980] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:02.939 [2024-11-20 08:47:33.715997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:02.939 [2024-11-20 08:47:33.716008] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:14:02.939 [2024-11-20 08:47:33.716023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:02.939 [2024-11-20 08:47:33.716033] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:02.939 [2024-11-20 08:47:33.716047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:02.939 08:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.939 08:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:02.939 08:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.939 08:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.939 08:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.939 08:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.939 08:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.939 08:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.939 08:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.939 08:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.939 08:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.939 08:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.939 08:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.939 08:47:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.939 08:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.939 08:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.939 08:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.939 "name": "Existed_Raid", 00:14:02.939 "uuid": "54472adf-fad6-4a52-bb77-bc7f948022d0", 00:14:02.939 "strip_size_kb": 0, 00:14:02.939 "state": "configuring", 00:14:02.939 "raid_level": "raid1", 00:14:02.939 "superblock": true, 00:14:02.939 "num_base_bdevs": 4, 00:14:02.939 "num_base_bdevs_discovered": 0, 00:14:02.939 "num_base_bdevs_operational": 4, 00:14:02.939 "base_bdevs_list": [ 00:14:02.939 { 00:14:02.939 "name": "BaseBdev1", 00:14:02.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.939 "is_configured": false, 00:14:02.939 "data_offset": 0, 00:14:02.939 "data_size": 0 00:14:02.939 }, 00:14:02.939 { 00:14:02.939 "name": "BaseBdev2", 00:14:02.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.939 "is_configured": false, 00:14:02.939 "data_offset": 0, 00:14:02.939 "data_size": 0 00:14:02.939 }, 00:14:02.939 { 00:14:02.939 "name": "BaseBdev3", 00:14:02.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.939 "is_configured": false, 00:14:02.939 "data_offset": 0, 00:14:02.939 "data_size": 0 00:14:02.939 }, 00:14:02.939 { 00:14:02.939 "name": "BaseBdev4", 00:14:02.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.939 "is_configured": false, 00:14:02.939 "data_offset": 0, 00:14:02.939 "data_size": 0 00:14:02.939 } 00:14:02.939 ] 00:14:02.939 }' 00:14:02.939 08:47:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.939 08:47:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.507 08:47:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:03.507 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.507 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.507 [2024-11-20 08:47:34.243978] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:03.507 [2024-11-20 08:47:34.244025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:03.507 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.507 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:03.507 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.507 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.507 [2024-11-20 08:47:34.251970] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:03.507 [2024-11-20 08:47:34.252024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:03.507 [2024-11-20 08:47:34.252040] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:03.507 [2024-11-20 08:47:34.252057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:03.507 [2024-11-20 08:47:34.252067] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:03.508 [2024-11-20 08:47:34.252083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:03.508 [2024-11-20 08:47:34.252092] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:14:03.508 [2024-11-20 08:47:34.252107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.508 [2024-11-20 08:47:34.297192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:03.508 BaseBdev1 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.508 [ 00:14:03.508 { 00:14:03.508 "name": "BaseBdev1", 00:14:03.508 "aliases": [ 00:14:03.508 "070434ef-43fa-4221-9736-d6aa4a5906b2" 00:14:03.508 ], 00:14:03.508 "product_name": "Malloc disk", 00:14:03.508 "block_size": 512, 00:14:03.508 "num_blocks": 65536, 00:14:03.508 "uuid": "070434ef-43fa-4221-9736-d6aa4a5906b2", 00:14:03.508 "assigned_rate_limits": { 00:14:03.508 "rw_ios_per_sec": 0, 00:14:03.508 "rw_mbytes_per_sec": 0, 00:14:03.508 "r_mbytes_per_sec": 0, 00:14:03.508 "w_mbytes_per_sec": 0 00:14:03.508 }, 00:14:03.508 "claimed": true, 00:14:03.508 "claim_type": "exclusive_write", 00:14:03.508 "zoned": false, 00:14:03.508 "supported_io_types": { 00:14:03.508 "read": true, 00:14:03.508 "write": true, 00:14:03.508 "unmap": true, 00:14:03.508 "flush": true, 00:14:03.508 "reset": true, 00:14:03.508 "nvme_admin": false, 00:14:03.508 "nvme_io": false, 00:14:03.508 "nvme_io_md": false, 00:14:03.508 "write_zeroes": true, 00:14:03.508 "zcopy": true, 00:14:03.508 "get_zone_info": false, 00:14:03.508 "zone_management": false, 00:14:03.508 "zone_append": false, 00:14:03.508 "compare": false, 00:14:03.508 "compare_and_write": false, 00:14:03.508 "abort": true, 00:14:03.508 "seek_hole": false, 00:14:03.508 "seek_data": false, 00:14:03.508 "copy": true, 00:14:03.508 "nvme_iov_md": false 00:14:03.508 }, 00:14:03.508 "memory_domains": [ 00:14:03.508 { 00:14:03.508 "dma_device_id": "system", 00:14:03.508 "dma_device_type": 1 00:14:03.508 }, 00:14:03.508 { 00:14:03.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.508 "dma_device_type": 2 00:14:03.508 } 00:14:03.508 
], 00:14:03.508 "driver_specific": {} 00:14:03.508 } 00:14:03.508 ] 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.508 08:47:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.508 "name": "Existed_Raid", 00:14:03.508 "uuid": "c7657629-47a4-4f6f-8903-a9e459b1b7f5", 00:14:03.508 "strip_size_kb": 0, 00:14:03.508 "state": "configuring", 00:14:03.508 "raid_level": "raid1", 00:14:03.508 "superblock": true, 00:14:03.508 "num_base_bdevs": 4, 00:14:03.508 "num_base_bdevs_discovered": 1, 00:14:03.508 "num_base_bdevs_operational": 4, 00:14:03.508 "base_bdevs_list": [ 00:14:03.508 { 00:14:03.508 "name": "BaseBdev1", 00:14:03.508 "uuid": "070434ef-43fa-4221-9736-d6aa4a5906b2", 00:14:03.508 "is_configured": true, 00:14:03.508 "data_offset": 2048, 00:14:03.508 "data_size": 63488 00:14:03.508 }, 00:14:03.508 { 00:14:03.508 "name": "BaseBdev2", 00:14:03.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.508 "is_configured": false, 00:14:03.508 "data_offset": 0, 00:14:03.508 "data_size": 0 00:14:03.508 }, 00:14:03.508 { 00:14:03.508 "name": "BaseBdev3", 00:14:03.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.508 "is_configured": false, 00:14:03.508 "data_offset": 0, 00:14:03.508 "data_size": 0 00:14:03.508 }, 00:14:03.508 { 00:14:03.508 "name": "BaseBdev4", 00:14:03.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.508 "is_configured": false, 00:14:03.508 "data_offset": 0, 00:14:03.508 "data_size": 0 00:14:03.508 } 00:14:03.508 ] 00:14:03.508 }' 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.508 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.075 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:04.075 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.075 08:47:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.075 [2024-11-20 08:47:34.841380] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:04.075 [2024-11-20 08:47:34.841446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:04.075 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.075 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:04.075 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.075 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.075 [2024-11-20 08:47:34.853444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.075 [2024-11-20 08:47:34.856087] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:04.075 [2024-11-20 08:47:34.856281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:04.075 [2024-11-20 08:47:34.856437] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:04.075 [2024-11-20 08:47:34.856505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:04.075 [2024-11-20 08:47:34.856673] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:04.075 [2024-11-20 08:47:34.856733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:04.075 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.075 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:14:04.075 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:04.075 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:04.075 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.075 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.075 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.075 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.075 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.075 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.075 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.076 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.076 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.076 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.076 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.076 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.076 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.076 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.076 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:14:04.076 "name": "Existed_Raid", 00:14:04.076 "uuid": "4a6f0ac0-75bd-41a3-a213-e4b75d5caec1", 00:14:04.076 "strip_size_kb": 0, 00:14:04.076 "state": "configuring", 00:14:04.076 "raid_level": "raid1", 00:14:04.076 "superblock": true, 00:14:04.076 "num_base_bdevs": 4, 00:14:04.076 "num_base_bdevs_discovered": 1, 00:14:04.076 "num_base_bdevs_operational": 4, 00:14:04.076 "base_bdevs_list": [ 00:14:04.076 { 00:14:04.076 "name": "BaseBdev1", 00:14:04.076 "uuid": "070434ef-43fa-4221-9736-d6aa4a5906b2", 00:14:04.076 "is_configured": true, 00:14:04.076 "data_offset": 2048, 00:14:04.076 "data_size": 63488 00:14:04.076 }, 00:14:04.076 { 00:14:04.076 "name": "BaseBdev2", 00:14:04.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.076 "is_configured": false, 00:14:04.076 "data_offset": 0, 00:14:04.076 "data_size": 0 00:14:04.076 }, 00:14:04.076 { 00:14:04.076 "name": "BaseBdev3", 00:14:04.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.076 "is_configured": false, 00:14:04.076 "data_offset": 0, 00:14:04.076 "data_size": 0 00:14:04.076 }, 00:14:04.076 { 00:14:04.076 "name": "BaseBdev4", 00:14:04.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.076 "is_configured": false, 00:14:04.076 "data_offset": 0, 00:14:04.076 "data_size": 0 00:14:04.076 } 00:14:04.076 ] 00:14:04.076 }' 00:14:04.076 08:47:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.076 08:47:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.643 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:04.643 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.643 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.643 [2024-11-20 08:47:35.395382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:14:04.643 BaseBdev2 00:14:04.643 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.643 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:04.643 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:04.643 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:04.643 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:04.643 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:04.643 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:04.643 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:04.643 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.643 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.643 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.643 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:04.643 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.643 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.643 [ 00:14:04.643 { 00:14:04.643 "name": "BaseBdev2", 00:14:04.643 "aliases": [ 00:14:04.643 "f3e737c4-065e-488a-a85e-66bbc7d799f8" 00:14:04.643 ], 00:14:04.643 "product_name": "Malloc disk", 00:14:04.643 "block_size": 512, 00:14:04.643 "num_blocks": 65536, 00:14:04.643 "uuid": "f3e737c4-065e-488a-a85e-66bbc7d799f8", 00:14:04.643 
"assigned_rate_limits": { 00:14:04.643 "rw_ios_per_sec": 0, 00:14:04.643 "rw_mbytes_per_sec": 0, 00:14:04.643 "r_mbytes_per_sec": 0, 00:14:04.643 "w_mbytes_per_sec": 0 00:14:04.643 }, 00:14:04.643 "claimed": true, 00:14:04.643 "claim_type": "exclusive_write", 00:14:04.643 "zoned": false, 00:14:04.643 "supported_io_types": { 00:14:04.643 "read": true, 00:14:04.643 "write": true, 00:14:04.643 "unmap": true, 00:14:04.643 "flush": true, 00:14:04.643 "reset": true, 00:14:04.643 "nvme_admin": false, 00:14:04.643 "nvme_io": false, 00:14:04.643 "nvme_io_md": false, 00:14:04.643 "write_zeroes": true, 00:14:04.643 "zcopy": true, 00:14:04.643 "get_zone_info": false, 00:14:04.643 "zone_management": false, 00:14:04.643 "zone_append": false, 00:14:04.643 "compare": false, 00:14:04.643 "compare_and_write": false, 00:14:04.643 "abort": true, 00:14:04.643 "seek_hole": false, 00:14:04.643 "seek_data": false, 00:14:04.643 "copy": true, 00:14:04.643 "nvme_iov_md": false 00:14:04.643 }, 00:14:04.643 "memory_domains": [ 00:14:04.643 { 00:14:04.643 "dma_device_id": "system", 00:14:04.643 "dma_device_type": 1 00:14:04.643 }, 00:14:04.643 { 00:14:04.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.643 "dma_device_type": 2 00:14:04.643 } 00:14:04.643 ], 00:14:04.643 "driver_specific": {} 00:14:04.643 } 00:14:04.643 ] 00:14:04.643 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.643 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:04.643 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:04.644 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:04.644 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:04.644 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:04.644 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.644 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.644 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.644 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.644 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.644 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.644 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.644 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.644 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.644 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.644 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.644 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.644 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.644 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.644 "name": "Existed_Raid", 00:14:04.644 "uuid": "4a6f0ac0-75bd-41a3-a213-e4b75d5caec1", 00:14:04.644 "strip_size_kb": 0, 00:14:04.644 "state": "configuring", 00:14:04.644 "raid_level": "raid1", 00:14:04.644 "superblock": true, 00:14:04.644 "num_base_bdevs": 4, 00:14:04.644 "num_base_bdevs_discovered": 2, 00:14:04.644 "num_base_bdevs_operational": 4, 
00:14:04.644 "base_bdevs_list": [ 00:14:04.644 { 00:14:04.644 "name": "BaseBdev1", 00:14:04.644 "uuid": "070434ef-43fa-4221-9736-d6aa4a5906b2", 00:14:04.644 "is_configured": true, 00:14:04.644 "data_offset": 2048, 00:14:04.644 "data_size": 63488 00:14:04.644 }, 00:14:04.644 { 00:14:04.644 "name": "BaseBdev2", 00:14:04.644 "uuid": "f3e737c4-065e-488a-a85e-66bbc7d799f8", 00:14:04.644 "is_configured": true, 00:14:04.644 "data_offset": 2048, 00:14:04.644 "data_size": 63488 00:14:04.644 }, 00:14:04.644 { 00:14:04.644 "name": "BaseBdev3", 00:14:04.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.644 "is_configured": false, 00:14:04.644 "data_offset": 0, 00:14:04.644 "data_size": 0 00:14:04.644 }, 00:14:04.644 { 00:14:04.644 "name": "BaseBdev4", 00:14:04.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.644 "is_configured": false, 00:14:04.644 "data_offset": 0, 00:14:04.644 "data_size": 0 00:14:04.644 } 00:14:04.644 ] 00:14:04.644 }' 00:14:04.644 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.644 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.265 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:05.265 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.265 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.265 [2024-11-20 08:47:35.990536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:05.265 BaseBdev3 00:14:05.265 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.265 08:47:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:05.265 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:14:05.265 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:05.265 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:05.265 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:05.265 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:05.265 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:05.265 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.265 08:47:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.265 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.265 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:05.265 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.265 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.265 [ 00:14:05.265 { 00:14:05.265 "name": "BaseBdev3", 00:14:05.265 "aliases": [ 00:14:05.265 "42b45127-8091-46e7-80b5-1bf7f8eb9ff3" 00:14:05.265 ], 00:14:05.265 "product_name": "Malloc disk", 00:14:05.265 "block_size": 512, 00:14:05.265 "num_blocks": 65536, 00:14:05.265 "uuid": "42b45127-8091-46e7-80b5-1bf7f8eb9ff3", 00:14:05.265 "assigned_rate_limits": { 00:14:05.265 "rw_ios_per_sec": 0, 00:14:05.265 "rw_mbytes_per_sec": 0, 00:14:05.265 "r_mbytes_per_sec": 0, 00:14:05.265 "w_mbytes_per_sec": 0 00:14:05.265 }, 00:14:05.265 "claimed": true, 00:14:05.265 "claim_type": "exclusive_write", 00:14:05.265 "zoned": false, 00:14:05.265 "supported_io_types": { 00:14:05.265 "read": true, 00:14:05.266 
"write": true, 00:14:05.266 "unmap": true, 00:14:05.266 "flush": true, 00:14:05.266 "reset": true, 00:14:05.266 "nvme_admin": false, 00:14:05.266 "nvme_io": false, 00:14:05.266 "nvme_io_md": false, 00:14:05.266 "write_zeroes": true, 00:14:05.266 "zcopy": true, 00:14:05.266 "get_zone_info": false, 00:14:05.266 "zone_management": false, 00:14:05.266 "zone_append": false, 00:14:05.266 "compare": false, 00:14:05.266 "compare_and_write": false, 00:14:05.266 "abort": true, 00:14:05.266 "seek_hole": false, 00:14:05.266 "seek_data": false, 00:14:05.266 "copy": true, 00:14:05.266 "nvme_iov_md": false 00:14:05.266 }, 00:14:05.266 "memory_domains": [ 00:14:05.266 { 00:14:05.266 "dma_device_id": "system", 00:14:05.266 "dma_device_type": 1 00:14:05.266 }, 00:14:05.266 { 00:14:05.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.266 "dma_device_type": 2 00:14:05.266 } 00:14:05.266 ], 00:14:05.266 "driver_specific": {} 00:14:05.266 } 00:14:05.266 ] 00:14:05.266 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.266 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:05.266 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:05.266 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:05.266 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:05.266 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.266 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.266 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.266 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:14:05.266 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:05.266 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.266 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.266 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.266 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.266 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.266 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.266 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.266 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.266 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.266 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.266 "name": "Existed_Raid", 00:14:05.266 "uuid": "4a6f0ac0-75bd-41a3-a213-e4b75d5caec1", 00:14:05.266 "strip_size_kb": 0, 00:14:05.266 "state": "configuring", 00:14:05.266 "raid_level": "raid1", 00:14:05.266 "superblock": true, 00:14:05.266 "num_base_bdevs": 4, 00:14:05.266 "num_base_bdevs_discovered": 3, 00:14:05.266 "num_base_bdevs_operational": 4, 00:14:05.266 "base_bdevs_list": [ 00:14:05.266 { 00:14:05.266 "name": "BaseBdev1", 00:14:05.266 "uuid": "070434ef-43fa-4221-9736-d6aa4a5906b2", 00:14:05.266 "is_configured": true, 00:14:05.266 "data_offset": 2048, 00:14:05.266 "data_size": 63488 00:14:05.266 }, 00:14:05.266 { 00:14:05.266 "name": "BaseBdev2", 00:14:05.266 "uuid": 
"f3e737c4-065e-488a-a85e-66bbc7d799f8", 00:14:05.266 "is_configured": true, 00:14:05.266 "data_offset": 2048, 00:14:05.266 "data_size": 63488 00:14:05.266 }, 00:14:05.266 { 00:14:05.266 "name": "BaseBdev3", 00:14:05.266 "uuid": "42b45127-8091-46e7-80b5-1bf7f8eb9ff3", 00:14:05.266 "is_configured": true, 00:14:05.266 "data_offset": 2048, 00:14:05.266 "data_size": 63488 00:14:05.266 }, 00:14:05.266 { 00:14:05.266 "name": "BaseBdev4", 00:14:05.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.266 "is_configured": false, 00:14:05.266 "data_offset": 0, 00:14:05.266 "data_size": 0 00:14:05.266 } 00:14:05.266 ] 00:14:05.266 }' 00:14:05.266 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.266 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.833 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:05.833 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.833 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.833 [2024-11-20 08:47:36.597230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:05.833 [2024-11-20 08:47:36.597561] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:05.833 [2024-11-20 08:47:36.597583] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:05.833 BaseBdev4 00:14:05.833 [2024-11-20 08:47:36.597922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:05.833 [2024-11-20 08:47:36.598127] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:05.833 [2024-11-20 08:47:36.598172] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:14:05.833 [2024-11-20 08:47:36.598360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.833 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.833 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:05.833 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:05.833 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:05.833 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:05.833 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:05.833 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:05.833 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:05.833 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.833 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.833 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.833 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:05.833 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.833 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.833 [ 00:14:05.833 { 00:14:05.833 "name": "BaseBdev4", 00:14:05.833 "aliases": [ 00:14:05.833 "c59bca6c-d19c-43ed-819b-ca29fbc41c6f" 00:14:05.833 ], 00:14:05.833 "product_name": "Malloc disk", 00:14:05.833 "block_size": 512, 00:14:05.833 
"num_blocks": 65536, 00:14:05.833 "uuid": "c59bca6c-d19c-43ed-819b-ca29fbc41c6f", 00:14:05.833 "assigned_rate_limits": { 00:14:05.833 "rw_ios_per_sec": 0, 00:14:05.833 "rw_mbytes_per_sec": 0, 00:14:05.833 "r_mbytes_per_sec": 0, 00:14:05.833 "w_mbytes_per_sec": 0 00:14:05.833 }, 00:14:05.833 "claimed": true, 00:14:05.833 "claim_type": "exclusive_write", 00:14:05.833 "zoned": false, 00:14:05.833 "supported_io_types": { 00:14:05.833 "read": true, 00:14:05.833 "write": true, 00:14:05.833 "unmap": true, 00:14:05.833 "flush": true, 00:14:05.833 "reset": true, 00:14:05.833 "nvme_admin": false, 00:14:05.833 "nvme_io": false, 00:14:05.833 "nvme_io_md": false, 00:14:05.833 "write_zeroes": true, 00:14:05.833 "zcopy": true, 00:14:05.833 "get_zone_info": false, 00:14:05.833 "zone_management": false, 00:14:05.833 "zone_append": false, 00:14:05.833 "compare": false, 00:14:05.833 "compare_and_write": false, 00:14:05.833 "abort": true, 00:14:05.833 "seek_hole": false, 00:14:05.833 "seek_data": false, 00:14:05.834 "copy": true, 00:14:05.834 "nvme_iov_md": false 00:14:05.834 }, 00:14:05.834 "memory_domains": [ 00:14:05.834 { 00:14:05.834 "dma_device_id": "system", 00:14:05.834 "dma_device_type": 1 00:14:05.834 }, 00:14:05.834 { 00:14:05.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.834 "dma_device_type": 2 00:14:05.834 } 00:14:05.834 ], 00:14:05.834 "driver_specific": {} 00:14:05.834 } 00:14:05.834 ] 00:14:05.834 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.834 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:05.834 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:05.834 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:05.834 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:14:05.834 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.834 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.834 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.834 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.834 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:05.834 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.834 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.834 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.834 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.834 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.834 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.834 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.834 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.834 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.834 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.834 "name": "Existed_Raid", 00:14:05.834 "uuid": "4a6f0ac0-75bd-41a3-a213-e4b75d5caec1", 00:14:05.834 "strip_size_kb": 0, 00:14:05.834 "state": "online", 00:14:05.834 "raid_level": "raid1", 00:14:05.834 "superblock": true, 00:14:05.834 "num_base_bdevs": 4, 
00:14:05.834 "num_base_bdevs_discovered": 4, 00:14:05.834 "num_base_bdevs_operational": 4, 00:14:05.834 "base_bdevs_list": [ 00:14:05.834 { 00:14:05.834 "name": "BaseBdev1", 00:14:05.834 "uuid": "070434ef-43fa-4221-9736-d6aa4a5906b2", 00:14:05.834 "is_configured": true, 00:14:05.834 "data_offset": 2048, 00:14:05.834 "data_size": 63488 00:14:05.834 }, 00:14:05.834 { 00:14:05.834 "name": "BaseBdev2", 00:14:05.834 "uuid": "f3e737c4-065e-488a-a85e-66bbc7d799f8", 00:14:05.834 "is_configured": true, 00:14:05.834 "data_offset": 2048, 00:14:05.834 "data_size": 63488 00:14:05.834 }, 00:14:05.834 { 00:14:05.834 "name": "BaseBdev3", 00:14:05.834 "uuid": "42b45127-8091-46e7-80b5-1bf7f8eb9ff3", 00:14:05.834 "is_configured": true, 00:14:05.834 "data_offset": 2048, 00:14:05.834 "data_size": 63488 00:14:05.834 }, 00:14:05.834 { 00:14:05.834 "name": "BaseBdev4", 00:14:05.834 "uuid": "c59bca6c-d19c-43ed-819b-ca29fbc41c6f", 00:14:05.834 "is_configured": true, 00:14:05.834 "data_offset": 2048, 00:14:05.834 "data_size": 63488 00:14:05.834 } 00:14:05.834 ] 00:14:05.834 }' 00:14:05.834 08:47:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.834 08:47:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.402 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:06.402 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:06.402 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:06.402 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:06.402 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:06.402 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:06.402 
08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:06.402 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:06.402 08:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.402 08:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.402 [2024-11-20 08:47:37.141839] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.402 08:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.402 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:06.402 "name": "Existed_Raid", 00:14:06.402 "aliases": [ 00:14:06.402 "4a6f0ac0-75bd-41a3-a213-e4b75d5caec1" 00:14:06.402 ], 00:14:06.402 "product_name": "Raid Volume", 00:14:06.402 "block_size": 512, 00:14:06.402 "num_blocks": 63488, 00:14:06.402 "uuid": "4a6f0ac0-75bd-41a3-a213-e4b75d5caec1", 00:14:06.402 "assigned_rate_limits": { 00:14:06.402 "rw_ios_per_sec": 0, 00:14:06.402 "rw_mbytes_per_sec": 0, 00:14:06.402 "r_mbytes_per_sec": 0, 00:14:06.402 "w_mbytes_per_sec": 0 00:14:06.402 }, 00:14:06.402 "claimed": false, 00:14:06.402 "zoned": false, 00:14:06.402 "supported_io_types": { 00:14:06.402 "read": true, 00:14:06.402 "write": true, 00:14:06.402 "unmap": false, 00:14:06.402 "flush": false, 00:14:06.402 "reset": true, 00:14:06.402 "nvme_admin": false, 00:14:06.402 "nvme_io": false, 00:14:06.402 "nvme_io_md": false, 00:14:06.402 "write_zeroes": true, 00:14:06.402 "zcopy": false, 00:14:06.402 "get_zone_info": false, 00:14:06.402 "zone_management": false, 00:14:06.402 "zone_append": false, 00:14:06.402 "compare": false, 00:14:06.402 "compare_and_write": false, 00:14:06.402 "abort": false, 00:14:06.402 "seek_hole": false, 00:14:06.402 "seek_data": false, 00:14:06.402 "copy": false, 00:14:06.402 
"nvme_iov_md": false 00:14:06.402 }, 00:14:06.402 "memory_domains": [ 00:14:06.402 { 00:14:06.402 "dma_device_id": "system", 00:14:06.402 "dma_device_type": 1 00:14:06.402 }, 00:14:06.402 { 00:14:06.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.402 "dma_device_type": 2 00:14:06.402 }, 00:14:06.402 { 00:14:06.402 "dma_device_id": "system", 00:14:06.402 "dma_device_type": 1 00:14:06.402 }, 00:14:06.402 { 00:14:06.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.402 "dma_device_type": 2 00:14:06.402 }, 00:14:06.402 { 00:14:06.402 "dma_device_id": "system", 00:14:06.402 "dma_device_type": 1 00:14:06.402 }, 00:14:06.402 { 00:14:06.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.402 "dma_device_type": 2 00:14:06.402 }, 00:14:06.402 { 00:14:06.402 "dma_device_id": "system", 00:14:06.402 "dma_device_type": 1 00:14:06.402 }, 00:14:06.402 { 00:14:06.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.402 "dma_device_type": 2 00:14:06.402 } 00:14:06.402 ], 00:14:06.402 "driver_specific": { 00:14:06.402 "raid": { 00:14:06.402 "uuid": "4a6f0ac0-75bd-41a3-a213-e4b75d5caec1", 00:14:06.402 "strip_size_kb": 0, 00:14:06.402 "state": "online", 00:14:06.402 "raid_level": "raid1", 00:14:06.402 "superblock": true, 00:14:06.402 "num_base_bdevs": 4, 00:14:06.402 "num_base_bdevs_discovered": 4, 00:14:06.402 "num_base_bdevs_operational": 4, 00:14:06.402 "base_bdevs_list": [ 00:14:06.402 { 00:14:06.402 "name": "BaseBdev1", 00:14:06.402 "uuid": "070434ef-43fa-4221-9736-d6aa4a5906b2", 00:14:06.402 "is_configured": true, 00:14:06.402 "data_offset": 2048, 00:14:06.402 "data_size": 63488 00:14:06.402 }, 00:14:06.402 { 00:14:06.402 "name": "BaseBdev2", 00:14:06.402 "uuid": "f3e737c4-065e-488a-a85e-66bbc7d799f8", 00:14:06.402 "is_configured": true, 00:14:06.402 "data_offset": 2048, 00:14:06.402 "data_size": 63488 00:14:06.402 }, 00:14:06.402 { 00:14:06.402 "name": "BaseBdev3", 00:14:06.402 "uuid": "42b45127-8091-46e7-80b5-1bf7f8eb9ff3", 00:14:06.402 "is_configured": true, 
00:14:06.402 "data_offset": 2048, 00:14:06.402 "data_size": 63488 00:14:06.402 }, 00:14:06.402 { 00:14:06.402 "name": "BaseBdev4", 00:14:06.402 "uuid": "c59bca6c-d19c-43ed-819b-ca29fbc41c6f", 00:14:06.402 "is_configured": true, 00:14:06.402 "data_offset": 2048, 00:14:06.402 "data_size": 63488 00:14:06.402 } 00:14:06.402 ] 00:14:06.402 } 00:14:06.402 } 00:14:06.402 }' 00:14:06.402 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:06.402 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:06.402 BaseBdev2 00:14:06.402 BaseBdev3 00:14:06.402 BaseBdev4' 00:14:06.402 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.402 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:06.402 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.402 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:06.402 08:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.402 08:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.402 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.402 08:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.661 08:47:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.661 08:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.661 [2024-11-20 08:47:37.505578] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:06.920 08:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.920 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:06.920 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:06.920 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:06.920 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:06.920 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:06.920 08:47:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:06.920 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:06.920 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.921 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.921 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.921 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.921 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.921 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.921 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.921 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.921 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.921 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.921 08:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.921 08:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.921 08:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.921 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.921 "name": "Existed_Raid", 00:14:06.921 "uuid": "4a6f0ac0-75bd-41a3-a213-e4b75d5caec1", 00:14:06.921 "strip_size_kb": 0, 00:14:06.921 
"state": "online", 00:14:06.921 "raid_level": "raid1", 00:14:06.921 "superblock": true, 00:14:06.921 "num_base_bdevs": 4, 00:14:06.921 "num_base_bdevs_discovered": 3, 00:14:06.921 "num_base_bdevs_operational": 3, 00:14:06.921 "base_bdevs_list": [ 00:14:06.921 { 00:14:06.921 "name": null, 00:14:06.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.921 "is_configured": false, 00:14:06.921 "data_offset": 0, 00:14:06.921 "data_size": 63488 00:14:06.921 }, 00:14:06.921 { 00:14:06.921 "name": "BaseBdev2", 00:14:06.921 "uuid": "f3e737c4-065e-488a-a85e-66bbc7d799f8", 00:14:06.921 "is_configured": true, 00:14:06.921 "data_offset": 2048, 00:14:06.921 "data_size": 63488 00:14:06.921 }, 00:14:06.921 { 00:14:06.921 "name": "BaseBdev3", 00:14:06.921 "uuid": "42b45127-8091-46e7-80b5-1bf7f8eb9ff3", 00:14:06.921 "is_configured": true, 00:14:06.921 "data_offset": 2048, 00:14:06.921 "data_size": 63488 00:14:06.921 }, 00:14:06.921 { 00:14:06.921 "name": "BaseBdev4", 00:14:06.921 "uuid": "c59bca6c-d19c-43ed-819b-ca29fbc41c6f", 00:14:06.921 "is_configured": true, 00:14:06.921 "data_offset": 2048, 00:14:06.921 "data_size": 63488 00:14:06.921 } 00:14:06.921 ] 00:14:06.921 }' 00:14:06.921 08:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.921 08:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.488 08:47:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.488 [2024-11-20 08:47:38.167994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.488 [2024-11-20 08:47:38.303103] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.488 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.747 [2024-11-20 08:47:38.447014] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:07.747 [2024-11-20 08:47:38.447286] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:07.747 [2024-11-20 08:47:38.530399] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:07.747 [2024-11-20 08:47:38.530472] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:07.747 [2024-11-20 08:47:38.530493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.747 BaseBdev2 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:07.747 [ 00:14:07.747 { 00:14:07.747 "name": "BaseBdev2", 00:14:07.747 "aliases": [ 00:14:07.747 "132b14cd-b608-4f25-ba09-b6685401f38a" 00:14:07.747 ], 00:14:07.747 "product_name": "Malloc disk", 00:14:07.747 "block_size": 512, 00:14:07.747 "num_blocks": 65536, 00:14:07.747 "uuid": "132b14cd-b608-4f25-ba09-b6685401f38a", 00:14:07.747 "assigned_rate_limits": { 00:14:07.747 "rw_ios_per_sec": 0, 00:14:07.747 "rw_mbytes_per_sec": 0, 00:14:07.747 "r_mbytes_per_sec": 0, 00:14:07.747 "w_mbytes_per_sec": 0 00:14:07.747 }, 00:14:07.747 "claimed": false, 00:14:07.747 "zoned": false, 00:14:07.747 "supported_io_types": { 00:14:07.747 "read": true, 00:14:07.747 "write": true, 00:14:07.747 "unmap": true, 00:14:07.747 "flush": true, 00:14:07.747 "reset": true, 00:14:07.747 "nvme_admin": false, 00:14:07.747 "nvme_io": false, 00:14:07.747 "nvme_io_md": false, 00:14:07.747 "write_zeroes": true, 00:14:07.747 "zcopy": true, 00:14:07.747 "get_zone_info": false, 00:14:07.747 "zone_management": false, 00:14:07.747 "zone_append": false, 00:14:07.747 "compare": false, 00:14:07.747 "compare_and_write": false, 00:14:07.747 "abort": true, 00:14:07.747 "seek_hole": false, 00:14:07.747 "seek_data": false, 00:14:07.747 "copy": true, 00:14:07.747 "nvme_iov_md": false 00:14:07.747 }, 00:14:07.747 "memory_domains": [ 00:14:07.747 { 00:14:07.747 "dma_device_id": "system", 00:14:07.747 "dma_device_type": 1 00:14:07.747 }, 00:14:07.747 { 00:14:07.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.747 "dma_device_type": 2 00:14:07.747 } 00:14:07.747 ], 00:14:07.747 "driver_specific": {} 00:14:07.747 } 00:14:07.747 ] 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:07.747 08:47:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.747 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.006 BaseBdev3 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.006 08:47:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.006 [ 00:14:08.006 { 00:14:08.006 "name": "BaseBdev3", 00:14:08.006 "aliases": [ 00:14:08.006 "88db2daa-db02-418f-98de-1957c66f7426" 00:14:08.006 ], 00:14:08.006 "product_name": "Malloc disk", 00:14:08.006 "block_size": 512, 00:14:08.006 "num_blocks": 65536, 00:14:08.006 "uuid": "88db2daa-db02-418f-98de-1957c66f7426", 00:14:08.006 "assigned_rate_limits": { 00:14:08.006 "rw_ios_per_sec": 0, 00:14:08.006 "rw_mbytes_per_sec": 0, 00:14:08.006 "r_mbytes_per_sec": 0, 00:14:08.006 "w_mbytes_per_sec": 0 00:14:08.006 }, 00:14:08.006 "claimed": false, 00:14:08.006 "zoned": false, 00:14:08.006 "supported_io_types": { 00:14:08.006 "read": true, 00:14:08.006 "write": true, 00:14:08.006 "unmap": true, 00:14:08.006 "flush": true, 00:14:08.006 "reset": true, 00:14:08.006 "nvme_admin": false, 00:14:08.006 "nvme_io": false, 00:14:08.006 "nvme_io_md": false, 00:14:08.006 "write_zeroes": true, 00:14:08.006 "zcopy": true, 00:14:08.006 "get_zone_info": false, 00:14:08.006 "zone_management": false, 00:14:08.006 "zone_append": false, 00:14:08.006 "compare": false, 00:14:08.006 "compare_and_write": false, 00:14:08.006 "abort": true, 00:14:08.006 "seek_hole": false, 00:14:08.006 "seek_data": false, 00:14:08.006 "copy": true, 00:14:08.006 "nvme_iov_md": false 00:14:08.006 }, 00:14:08.006 "memory_domains": [ 00:14:08.006 { 00:14:08.006 "dma_device_id": "system", 00:14:08.006 "dma_device_type": 1 00:14:08.006 }, 00:14:08.006 { 00:14:08.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.006 "dma_device_type": 2 00:14:08.006 } 00:14:08.006 ], 00:14:08.006 "driver_specific": {} 00:14:08.006 } 00:14:08.006 ] 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.006 BaseBdev4 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.006 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.006 [ 00:14:08.006 { 00:14:08.006 "name": "BaseBdev4", 00:14:08.006 "aliases": [ 00:14:08.007 "18f119e1-7360-4d0d-9ad3-be141c4db7b3" 00:14:08.007 ], 00:14:08.007 "product_name": "Malloc disk", 00:14:08.007 "block_size": 512, 00:14:08.007 "num_blocks": 65536, 00:14:08.007 "uuid": "18f119e1-7360-4d0d-9ad3-be141c4db7b3", 00:14:08.007 "assigned_rate_limits": { 00:14:08.007 "rw_ios_per_sec": 0, 00:14:08.007 "rw_mbytes_per_sec": 0, 00:14:08.007 "r_mbytes_per_sec": 0, 00:14:08.007 "w_mbytes_per_sec": 0 00:14:08.007 }, 00:14:08.007 "claimed": false, 00:14:08.007 "zoned": false, 00:14:08.007 "supported_io_types": { 00:14:08.007 "read": true, 00:14:08.007 "write": true, 00:14:08.007 "unmap": true, 00:14:08.007 "flush": true, 00:14:08.007 "reset": true, 00:14:08.007 "nvme_admin": false, 00:14:08.007 "nvme_io": false, 00:14:08.007 "nvme_io_md": false, 00:14:08.007 "write_zeroes": true, 00:14:08.007 "zcopy": true, 00:14:08.007 "get_zone_info": false, 00:14:08.007 "zone_management": false, 00:14:08.007 "zone_append": false, 00:14:08.007 "compare": false, 00:14:08.007 "compare_and_write": false, 00:14:08.007 "abort": true, 00:14:08.007 "seek_hole": false, 00:14:08.007 "seek_data": false, 00:14:08.007 "copy": true, 00:14:08.007 "nvme_iov_md": false 00:14:08.007 }, 00:14:08.007 "memory_domains": [ 00:14:08.007 { 00:14:08.007 "dma_device_id": "system", 00:14:08.007 "dma_device_type": 1 00:14:08.007 }, 00:14:08.007 { 00:14:08.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.007 "dma_device_type": 2 00:14:08.007 } 00:14:08.007 ], 00:14:08.007 "driver_specific": {} 00:14:08.007 } 00:14:08.007 ] 00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.007 [2024-11-20 08:47:38.813634] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:08.007 [2024-11-20 08:47:38.813695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:08.007 [2024-11-20 08:47:38.813726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:08.007 [2024-11-20 08:47:38.816170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:08.007 [2024-11-20 08:47:38.816241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.007 "name": "Existed_Raid", 00:14:08.007 "uuid": "bee576fd-1e95-4b46-9998-58e93c394715", 00:14:08.007 "strip_size_kb": 0, 00:14:08.007 "state": "configuring", 00:14:08.007 "raid_level": "raid1", 00:14:08.007 "superblock": true, 00:14:08.007 "num_base_bdevs": 4, 00:14:08.007 "num_base_bdevs_discovered": 3, 00:14:08.007 "num_base_bdevs_operational": 4, 00:14:08.007 "base_bdevs_list": [ 00:14:08.007 { 00:14:08.007 "name": "BaseBdev1", 00:14:08.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.007 "is_configured": false, 00:14:08.007 "data_offset": 0, 00:14:08.007 "data_size": 0 00:14:08.007 }, 00:14:08.007 { 00:14:08.007 "name": "BaseBdev2", 00:14:08.007 "uuid": "132b14cd-b608-4f25-ba09-b6685401f38a", 
00:14:08.007 "is_configured": true, 00:14:08.007 "data_offset": 2048, 00:14:08.007 "data_size": 63488 00:14:08.007 }, 00:14:08.007 { 00:14:08.007 "name": "BaseBdev3", 00:14:08.007 "uuid": "88db2daa-db02-418f-98de-1957c66f7426", 00:14:08.007 "is_configured": true, 00:14:08.007 "data_offset": 2048, 00:14:08.007 "data_size": 63488 00:14:08.007 }, 00:14:08.007 { 00:14:08.007 "name": "BaseBdev4", 00:14:08.007 "uuid": "18f119e1-7360-4d0d-9ad3-be141c4db7b3", 00:14:08.007 "is_configured": true, 00:14:08.007 "data_offset": 2048, 00:14:08.007 "data_size": 63488 00:14:08.007 } 00:14:08.007 ] 00:14:08.007 }' 00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.007 08:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.574 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:08.574 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.574 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.574 [2024-11-20 08:47:39.337800] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:08.574 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.574 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:08.574 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.574 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.574 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.574 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:08.574 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:08.574 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.574 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.574 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.574 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.574 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.574 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.574 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.574 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.574 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.574 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.574 "name": "Existed_Raid", 00:14:08.574 "uuid": "bee576fd-1e95-4b46-9998-58e93c394715", 00:14:08.574 "strip_size_kb": 0, 00:14:08.574 "state": "configuring", 00:14:08.574 "raid_level": "raid1", 00:14:08.574 "superblock": true, 00:14:08.574 "num_base_bdevs": 4, 00:14:08.574 "num_base_bdevs_discovered": 2, 00:14:08.574 "num_base_bdevs_operational": 4, 00:14:08.574 "base_bdevs_list": [ 00:14:08.574 { 00:14:08.574 "name": "BaseBdev1", 00:14:08.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.574 "is_configured": false, 00:14:08.574 "data_offset": 0, 00:14:08.574 "data_size": 0 00:14:08.574 }, 00:14:08.574 { 00:14:08.574 "name": null, 00:14:08.574 "uuid": "132b14cd-b608-4f25-ba09-b6685401f38a", 00:14:08.574 
"is_configured": false, 00:14:08.574 "data_offset": 0, 00:14:08.574 "data_size": 63488 00:14:08.574 }, 00:14:08.574 { 00:14:08.574 "name": "BaseBdev3", 00:14:08.574 "uuid": "88db2daa-db02-418f-98de-1957c66f7426", 00:14:08.574 "is_configured": true, 00:14:08.574 "data_offset": 2048, 00:14:08.574 "data_size": 63488 00:14:08.574 }, 00:14:08.574 { 00:14:08.574 "name": "BaseBdev4", 00:14:08.574 "uuid": "18f119e1-7360-4d0d-9ad3-be141c4db7b3", 00:14:08.574 "is_configured": true, 00:14:08.574 "data_offset": 2048, 00:14:08.574 "data_size": 63488 00:14:08.574 } 00:14:08.574 ] 00:14:08.574 }' 00:14:08.574 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.574 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.141 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.141 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.141 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.141 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:09.141 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.141 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:09.141 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:09.141 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.141 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.141 [2024-11-20 08:47:39.947615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.141 BaseBdev1 
00:14:09.141 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.141 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:09.141 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:09.141 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:09.141 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:09.141 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:09.141 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:09.141 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:09.141 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.141 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.141 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.141 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:09.141 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.141 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.141 [ 00:14:09.141 { 00:14:09.141 "name": "BaseBdev1", 00:14:09.141 "aliases": [ 00:14:09.141 "718a5a94-313b-400b-ae5f-5e362785544d" 00:14:09.141 ], 00:14:09.141 "product_name": "Malloc disk", 00:14:09.141 "block_size": 512, 00:14:09.141 "num_blocks": 65536, 00:14:09.141 "uuid": "718a5a94-313b-400b-ae5f-5e362785544d", 00:14:09.141 "assigned_rate_limits": { 00:14:09.141 
"rw_ios_per_sec": 0, 00:14:09.141 "rw_mbytes_per_sec": 0, 00:14:09.141 "r_mbytes_per_sec": 0, 00:14:09.141 "w_mbytes_per_sec": 0 00:14:09.141 }, 00:14:09.142 "claimed": true, 00:14:09.142 "claim_type": "exclusive_write", 00:14:09.142 "zoned": false, 00:14:09.142 "supported_io_types": { 00:14:09.142 "read": true, 00:14:09.142 "write": true, 00:14:09.142 "unmap": true, 00:14:09.142 "flush": true, 00:14:09.142 "reset": true, 00:14:09.142 "nvme_admin": false, 00:14:09.142 "nvme_io": false, 00:14:09.142 "nvme_io_md": false, 00:14:09.142 "write_zeroes": true, 00:14:09.142 "zcopy": true, 00:14:09.142 "get_zone_info": false, 00:14:09.142 "zone_management": false, 00:14:09.142 "zone_append": false, 00:14:09.142 "compare": false, 00:14:09.142 "compare_and_write": false, 00:14:09.142 "abort": true, 00:14:09.142 "seek_hole": false, 00:14:09.142 "seek_data": false, 00:14:09.142 "copy": true, 00:14:09.142 "nvme_iov_md": false 00:14:09.142 }, 00:14:09.142 "memory_domains": [ 00:14:09.142 { 00:14:09.142 "dma_device_id": "system", 00:14:09.142 "dma_device_type": 1 00:14:09.142 }, 00:14:09.142 { 00:14:09.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.142 "dma_device_type": 2 00:14:09.142 } 00:14:09.142 ], 00:14:09.142 "driver_specific": {} 00:14:09.142 } 00:14:09.142 ] 00:14:09.142 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.142 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:09.142 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:09.142 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.142 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.142 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:14:09.142 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.142 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:09.142 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.142 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.142 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.142 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.142 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.142 08:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.142 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.142 08:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.142 08:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.142 08:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.142 "name": "Existed_Raid", 00:14:09.142 "uuid": "bee576fd-1e95-4b46-9998-58e93c394715", 00:14:09.142 "strip_size_kb": 0, 00:14:09.142 "state": "configuring", 00:14:09.142 "raid_level": "raid1", 00:14:09.142 "superblock": true, 00:14:09.142 "num_base_bdevs": 4, 00:14:09.142 "num_base_bdevs_discovered": 3, 00:14:09.142 "num_base_bdevs_operational": 4, 00:14:09.142 "base_bdevs_list": [ 00:14:09.142 { 00:14:09.142 "name": "BaseBdev1", 00:14:09.142 "uuid": "718a5a94-313b-400b-ae5f-5e362785544d", 00:14:09.142 "is_configured": true, 00:14:09.142 "data_offset": 2048, 00:14:09.142 "data_size": 63488 
00:14:09.142 }, 00:14:09.142 { 00:14:09.142 "name": null, 00:14:09.142 "uuid": "132b14cd-b608-4f25-ba09-b6685401f38a", 00:14:09.142 "is_configured": false, 00:14:09.142 "data_offset": 0, 00:14:09.142 "data_size": 63488 00:14:09.142 }, 00:14:09.142 { 00:14:09.142 "name": "BaseBdev3", 00:14:09.142 "uuid": "88db2daa-db02-418f-98de-1957c66f7426", 00:14:09.142 "is_configured": true, 00:14:09.142 "data_offset": 2048, 00:14:09.142 "data_size": 63488 00:14:09.142 }, 00:14:09.142 { 00:14:09.142 "name": "BaseBdev4", 00:14:09.142 "uuid": "18f119e1-7360-4d0d-9ad3-be141c4db7b3", 00:14:09.142 "is_configured": true, 00:14:09.142 "data_offset": 2048, 00:14:09.142 "data_size": 63488 00:14:09.142 } 00:14:09.142 ] 00:14:09.142 }' 00:14:09.142 08:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.142 08:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.710 
[2024-11-20 08:47:40.583882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.710 08:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.969 08:47:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.969 "name": "Existed_Raid", 00:14:09.969 "uuid": "bee576fd-1e95-4b46-9998-58e93c394715", 00:14:09.969 "strip_size_kb": 0, 00:14:09.969 "state": "configuring", 00:14:09.969 "raid_level": "raid1", 00:14:09.969 "superblock": true, 00:14:09.969 "num_base_bdevs": 4, 00:14:09.969 "num_base_bdevs_discovered": 2, 00:14:09.969 "num_base_bdevs_operational": 4, 00:14:09.969 "base_bdevs_list": [ 00:14:09.969 { 00:14:09.969 "name": "BaseBdev1", 00:14:09.969 "uuid": "718a5a94-313b-400b-ae5f-5e362785544d", 00:14:09.969 "is_configured": true, 00:14:09.969 "data_offset": 2048, 00:14:09.969 "data_size": 63488 00:14:09.969 }, 00:14:09.969 { 00:14:09.969 "name": null, 00:14:09.969 "uuid": "132b14cd-b608-4f25-ba09-b6685401f38a", 00:14:09.969 "is_configured": false, 00:14:09.969 "data_offset": 0, 00:14:09.969 "data_size": 63488 00:14:09.969 }, 00:14:09.969 { 00:14:09.969 "name": null, 00:14:09.969 "uuid": "88db2daa-db02-418f-98de-1957c66f7426", 00:14:09.969 "is_configured": false, 00:14:09.969 "data_offset": 0, 00:14:09.969 "data_size": 63488 00:14:09.969 }, 00:14:09.969 { 00:14:09.969 "name": "BaseBdev4", 00:14:09.969 "uuid": "18f119e1-7360-4d0d-9ad3-be141c4db7b3", 00:14:09.969 "is_configured": true, 00:14:09.969 "data_offset": 2048, 00:14:09.969 "data_size": 63488 00:14:09.969 } 00:14:09.969 ] 00:14:09.969 }' 00:14:09.969 08:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.969 08:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.227 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.227 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:10.227 08:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.227 
08:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.516 08:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.516 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:10.516 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:10.516 08:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.516 08:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.516 [2024-11-20 08:47:41.179999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:10.517 08:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.517 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:10.517 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.517 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.517 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.517 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.517 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.517 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.517 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.517 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:10.517 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.517 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.517 08:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.517 08:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.517 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.517 08:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.517 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.517 "name": "Existed_Raid", 00:14:10.517 "uuid": "bee576fd-1e95-4b46-9998-58e93c394715", 00:14:10.517 "strip_size_kb": 0, 00:14:10.517 "state": "configuring", 00:14:10.517 "raid_level": "raid1", 00:14:10.517 "superblock": true, 00:14:10.517 "num_base_bdevs": 4, 00:14:10.517 "num_base_bdevs_discovered": 3, 00:14:10.517 "num_base_bdevs_operational": 4, 00:14:10.517 "base_bdevs_list": [ 00:14:10.517 { 00:14:10.517 "name": "BaseBdev1", 00:14:10.517 "uuid": "718a5a94-313b-400b-ae5f-5e362785544d", 00:14:10.517 "is_configured": true, 00:14:10.517 "data_offset": 2048, 00:14:10.517 "data_size": 63488 00:14:10.517 }, 00:14:10.517 { 00:14:10.517 "name": null, 00:14:10.517 "uuid": "132b14cd-b608-4f25-ba09-b6685401f38a", 00:14:10.517 "is_configured": false, 00:14:10.517 "data_offset": 0, 00:14:10.517 "data_size": 63488 00:14:10.517 }, 00:14:10.517 { 00:14:10.517 "name": "BaseBdev3", 00:14:10.517 "uuid": "88db2daa-db02-418f-98de-1957c66f7426", 00:14:10.517 "is_configured": true, 00:14:10.517 "data_offset": 2048, 00:14:10.517 "data_size": 63488 00:14:10.517 }, 00:14:10.517 { 00:14:10.517 "name": "BaseBdev4", 00:14:10.517 "uuid": 
"18f119e1-7360-4d0d-9ad3-be141c4db7b3", 00:14:10.517 "is_configured": true, 00:14:10.517 "data_offset": 2048, 00:14:10.517 "data_size": 63488 00:14:10.517 } 00:14:10.517 ] 00:14:10.517 }' 00:14:10.517 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.517 08:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.086 [2024-11-20 08:47:41.772218] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.086 "name": "Existed_Raid", 00:14:11.086 "uuid": "bee576fd-1e95-4b46-9998-58e93c394715", 00:14:11.086 "strip_size_kb": 0, 00:14:11.086 "state": "configuring", 00:14:11.086 "raid_level": "raid1", 00:14:11.086 "superblock": true, 00:14:11.086 "num_base_bdevs": 4, 00:14:11.086 "num_base_bdevs_discovered": 2, 00:14:11.086 "num_base_bdevs_operational": 4, 00:14:11.086 "base_bdevs_list": [ 00:14:11.086 { 00:14:11.086 "name": null, 00:14:11.086 
"uuid": "718a5a94-313b-400b-ae5f-5e362785544d", 00:14:11.086 "is_configured": false, 00:14:11.086 "data_offset": 0, 00:14:11.086 "data_size": 63488 00:14:11.086 }, 00:14:11.086 { 00:14:11.086 "name": null, 00:14:11.086 "uuid": "132b14cd-b608-4f25-ba09-b6685401f38a", 00:14:11.086 "is_configured": false, 00:14:11.086 "data_offset": 0, 00:14:11.086 "data_size": 63488 00:14:11.086 }, 00:14:11.086 { 00:14:11.086 "name": "BaseBdev3", 00:14:11.086 "uuid": "88db2daa-db02-418f-98de-1957c66f7426", 00:14:11.086 "is_configured": true, 00:14:11.086 "data_offset": 2048, 00:14:11.086 "data_size": 63488 00:14:11.086 }, 00:14:11.086 { 00:14:11.086 "name": "BaseBdev4", 00:14:11.086 "uuid": "18f119e1-7360-4d0d-9ad3-be141c4db7b3", 00:14:11.086 "is_configured": true, 00:14:11.086 "data_offset": 2048, 00:14:11.086 "data_size": 63488 00:14:11.086 } 00:14:11.086 ] 00:14:11.086 }' 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.086 08:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.654 [2024-11-20 08:47:42.428424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.654 08:47:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.654 08:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.654 "name": "Existed_Raid", 00:14:11.654 "uuid": "bee576fd-1e95-4b46-9998-58e93c394715", 00:14:11.654 "strip_size_kb": 0, 00:14:11.654 "state": "configuring", 00:14:11.654 "raid_level": "raid1", 00:14:11.654 "superblock": true, 00:14:11.654 "num_base_bdevs": 4, 00:14:11.654 "num_base_bdevs_discovered": 3, 00:14:11.654 "num_base_bdevs_operational": 4, 00:14:11.654 "base_bdevs_list": [ 00:14:11.654 { 00:14:11.654 "name": null, 00:14:11.654 "uuid": "718a5a94-313b-400b-ae5f-5e362785544d", 00:14:11.654 "is_configured": false, 00:14:11.654 "data_offset": 0, 00:14:11.654 "data_size": 63488 00:14:11.654 }, 00:14:11.654 { 00:14:11.654 "name": "BaseBdev2", 00:14:11.654 "uuid": "132b14cd-b608-4f25-ba09-b6685401f38a", 00:14:11.654 "is_configured": true, 00:14:11.655 "data_offset": 2048, 00:14:11.655 "data_size": 63488 00:14:11.655 }, 00:14:11.655 { 00:14:11.655 "name": "BaseBdev3", 00:14:11.655 "uuid": "88db2daa-db02-418f-98de-1957c66f7426", 00:14:11.655 "is_configured": true, 00:14:11.655 "data_offset": 2048, 00:14:11.655 "data_size": 63488 00:14:11.655 }, 00:14:11.655 { 00:14:11.655 "name": "BaseBdev4", 00:14:11.655 "uuid": "18f119e1-7360-4d0d-9ad3-be141c4db7b3", 00:14:11.655 "is_configured": true, 00:14:11.655 "data_offset": 2048, 00:14:11.655 "data_size": 63488 00:14:11.655 } 00:14:11.655 ] 00:14:11.655 }' 00:14:11.655 08:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.655 08:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.223 08:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.223 08:47:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:12.223 08:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.223 08:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.223 08:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.223 08:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:12.223 08:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:12.223 08:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.223 08:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.223 08:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.223 08:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.223 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 718a5a94-313b-400b-ae5f-5e362785544d 00:14:12.223 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.223 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.223 [2024-11-20 08:47:43.071256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:12.223 [2024-11-20 08:47:43.071565] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:12.223 [2024-11-20 08:47:43.071590] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:12.223 NewBaseBdev 00:14:12.223 [2024-11-20 08:47:43.071935] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:12.223 [2024-11-20 08:47:43.072182] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:12.223 [2024-11-20 08:47:43.072200] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:12.223 [2024-11-20 08:47:43.072367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.223 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.223 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:12.223 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:12.223 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:12.223 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:12.223 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:12.223 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:12.223 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:12.223 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.223 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.223 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.223 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:12.223 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.223 
08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.223 [ 00:14:12.223 { 00:14:12.223 "name": "NewBaseBdev", 00:14:12.223 "aliases": [ 00:14:12.223 "718a5a94-313b-400b-ae5f-5e362785544d" 00:14:12.223 ], 00:14:12.223 "product_name": "Malloc disk", 00:14:12.223 "block_size": 512, 00:14:12.223 "num_blocks": 65536, 00:14:12.223 "uuid": "718a5a94-313b-400b-ae5f-5e362785544d", 00:14:12.223 "assigned_rate_limits": { 00:14:12.223 "rw_ios_per_sec": 0, 00:14:12.223 "rw_mbytes_per_sec": 0, 00:14:12.223 "r_mbytes_per_sec": 0, 00:14:12.223 "w_mbytes_per_sec": 0 00:14:12.223 }, 00:14:12.223 "claimed": true, 00:14:12.223 "claim_type": "exclusive_write", 00:14:12.223 "zoned": false, 00:14:12.223 "supported_io_types": { 00:14:12.223 "read": true, 00:14:12.223 "write": true, 00:14:12.223 "unmap": true, 00:14:12.223 "flush": true, 00:14:12.223 "reset": true, 00:14:12.223 "nvme_admin": false, 00:14:12.223 "nvme_io": false, 00:14:12.223 "nvme_io_md": false, 00:14:12.223 "write_zeroes": true, 00:14:12.223 "zcopy": true, 00:14:12.223 "get_zone_info": false, 00:14:12.223 "zone_management": false, 00:14:12.223 "zone_append": false, 00:14:12.223 "compare": false, 00:14:12.223 "compare_and_write": false, 00:14:12.223 "abort": true, 00:14:12.223 "seek_hole": false, 00:14:12.223 "seek_data": false, 00:14:12.224 "copy": true, 00:14:12.224 "nvme_iov_md": false 00:14:12.224 }, 00:14:12.224 "memory_domains": [ 00:14:12.224 { 00:14:12.224 "dma_device_id": "system", 00:14:12.224 "dma_device_type": 1 00:14:12.224 }, 00:14:12.224 { 00:14:12.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.224 "dma_device_type": 2 00:14:12.224 } 00:14:12.224 ], 00:14:12.224 "driver_specific": {} 00:14:12.224 } 00:14:12.224 ] 00:14:12.224 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.224 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:12.224 08:47:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:12.224 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.224 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.224 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.224 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.224 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.224 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.224 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.224 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.224 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.224 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.224 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.224 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.224 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.224 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.483 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.483 "name": "Existed_Raid", 00:14:12.483 "uuid": "bee576fd-1e95-4b46-9998-58e93c394715", 00:14:12.483 "strip_size_kb": 0, 00:14:12.483 
"state": "online", 00:14:12.483 "raid_level": "raid1", 00:14:12.483 "superblock": true, 00:14:12.483 "num_base_bdevs": 4, 00:14:12.483 "num_base_bdevs_discovered": 4, 00:14:12.483 "num_base_bdevs_operational": 4, 00:14:12.483 "base_bdevs_list": [ 00:14:12.483 { 00:14:12.483 "name": "NewBaseBdev", 00:14:12.483 "uuid": "718a5a94-313b-400b-ae5f-5e362785544d", 00:14:12.483 "is_configured": true, 00:14:12.483 "data_offset": 2048, 00:14:12.483 "data_size": 63488 00:14:12.483 }, 00:14:12.483 { 00:14:12.483 "name": "BaseBdev2", 00:14:12.483 "uuid": "132b14cd-b608-4f25-ba09-b6685401f38a", 00:14:12.483 "is_configured": true, 00:14:12.483 "data_offset": 2048, 00:14:12.483 "data_size": 63488 00:14:12.483 }, 00:14:12.483 { 00:14:12.483 "name": "BaseBdev3", 00:14:12.483 "uuid": "88db2daa-db02-418f-98de-1957c66f7426", 00:14:12.483 "is_configured": true, 00:14:12.483 "data_offset": 2048, 00:14:12.483 "data_size": 63488 00:14:12.483 }, 00:14:12.483 { 00:14:12.483 "name": "BaseBdev4", 00:14:12.483 "uuid": "18f119e1-7360-4d0d-9ad3-be141c4db7b3", 00:14:12.483 "is_configured": true, 00:14:12.483 "data_offset": 2048, 00:14:12.483 "data_size": 63488 00:14:12.483 } 00:14:12.483 ] 00:14:12.483 }' 00:14:12.483 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.483 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.742 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:12.742 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:12.742 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:12.742 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:12.742 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:12.743 
08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:12.743 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:12.743 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.743 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.743 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:12.743 [2024-11-20 08:47:43.619893] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.743 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:13.002 "name": "Existed_Raid", 00:14:13.002 "aliases": [ 00:14:13.002 "bee576fd-1e95-4b46-9998-58e93c394715" 00:14:13.002 ], 00:14:13.002 "product_name": "Raid Volume", 00:14:13.002 "block_size": 512, 00:14:13.002 "num_blocks": 63488, 00:14:13.002 "uuid": "bee576fd-1e95-4b46-9998-58e93c394715", 00:14:13.002 "assigned_rate_limits": { 00:14:13.002 "rw_ios_per_sec": 0, 00:14:13.002 "rw_mbytes_per_sec": 0, 00:14:13.002 "r_mbytes_per_sec": 0, 00:14:13.002 "w_mbytes_per_sec": 0 00:14:13.002 }, 00:14:13.002 "claimed": false, 00:14:13.002 "zoned": false, 00:14:13.002 "supported_io_types": { 00:14:13.002 "read": true, 00:14:13.002 "write": true, 00:14:13.002 "unmap": false, 00:14:13.002 "flush": false, 00:14:13.002 "reset": true, 00:14:13.002 "nvme_admin": false, 00:14:13.002 "nvme_io": false, 00:14:13.002 "nvme_io_md": false, 00:14:13.002 "write_zeroes": true, 00:14:13.002 "zcopy": false, 00:14:13.002 "get_zone_info": false, 00:14:13.002 "zone_management": false, 00:14:13.002 "zone_append": false, 00:14:13.002 "compare": false, 00:14:13.002 "compare_and_write": false, 00:14:13.002 
"abort": false, 00:14:13.002 "seek_hole": false, 00:14:13.002 "seek_data": false, 00:14:13.002 "copy": false, 00:14:13.002 "nvme_iov_md": false 00:14:13.002 }, 00:14:13.002 "memory_domains": [ 00:14:13.002 { 00:14:13.002 "dma_device_id": "system", 00:14:13.002 "dma_device_type": 1 00:14:13.002 }, 00:14:13.002 { 00:14:13.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.002 "dma_device_type": 2 00:14:13.002 }, 00:14:13.002 { 00:14:13.002 "dma_device_id": "system", 00:14:13.002 "dma_device_type": 1 00:14:13.002 }, 00:14:13.002 { 00:14:13.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.002 "dma_device_type": 2 00:14:13.002 }, 00:14:13.002 { 00:14:13.002 "dma_device_id": "system", 00:14:13.002 "dma_device_type": 1 00:14:13.002 }, 00:14:13.002 { 00:14:13.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.002 "dma_device_type": 2 00:14:13.002 }, 00:14:13.002 { 00:14:13.002 "dma_device_id": "system", 00:14:13.002 "dma_device_type": 1 00:14:13.002 }, 00:14:13.002 { 00:14:13.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.002 "dma_device_type": 2 00:14:13.002 } 00:14:13.002 ], 00:14:13.002 "driver_specific": { 00:14:13.002 "raid": { 00:14:13.002 "uuid": "bee576fd-1e95-4b46-9998-58e93c394715", 00:14:13.002 "strip_size_kb": 0, 00:14:13.002 "state": "online", 00:14:13.002 "raid_level": "raid1", 00:14:13.002 "superblock": true, 00:14:13.002 "num_base_bdevs": 4, 00:14:13.002 "num_base_bdevs_discovered": 4, 00:14:13.002 "num_base_bdevs_operational": 4, 00:14:13.002 "base_bdevs_list": [ 00:14:13.002 { 00:14:13.002 "name": "NewBaseBdev", 00:14:13.002 "uuid": "718a5a94-313b-400b-ae5f-5e362785544d", 00:14:13.002 "is_configured": true, 00:14:13.002 "data_offset": 2048, 00:14:13.002 "data_size": 63488 00:14:13.002 }, 00:14:13.002 { 00:14:13.002 "name": "BaseBdev2", 00:14:13.002 "uuid": "132b14cd-b608-4f25-ba09-b6685401f38a", 00:14:13.002 "is_configured": true, 00:14:13.002 "data_offset": 2048, 00:14:13.002 "data_size": 63488 00:14:13.002 }, 00:14:13.002 { 
00:14:13.002 "name": "BaseBdev3", 00:14:13.002 "uuid": "88db2daa-db02-418f-98de-1957c66f7426", 00:14:13.002 "is_configured": true, 00:14:13.002 "data_offset": 2048, 00:14:13.002 "data_size": 63488 00:14:13.002 }, 00:14:13.002 { 00:14:13.002 "name": "BaseBdev4", 00:14:13.002 "uuid": "18f119e1-7360-4d0d-9ad3-be141c4db7b3", 00:14:13.002 "is_configured": true, 00:14:13.002 "data_offset": 2048, 00:14:13.002 "data_size": 63488 00:14:13.002 } 00:14:13.002 ] 00:14:13.002 } 00:14:13.002 } 00:14:13.002 }' 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:13.002 BaseBdev2 00:14:13.002 BaseBdev3 00:14:13.002 BaseBdev4' 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:13.002 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.003 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.261 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:13.261 08:47:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:13.261 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:13.261 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:13.261 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:13.261 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.261 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.261 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.261 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:13.261 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:13.261 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:13.261 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.261 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.261 [2024-11-20 08:47:43.979502] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:13.261 [2024-11-20 08:47:43.979541] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:13.262 [2024-11-20 08:47:43.979644] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.262 [2024-11-20 08:47:43.980028] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:13.262 [2024-11-20 08:47:43.980051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:14:13.262 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.262 08:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74004 00:14:13.262 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74004 ']' 00:14:13.262 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74004 00:14:13.262 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:13.262 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:13.262 08:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74004 00:14:13.262 08:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:13.262 08:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:13.262 killing process with pid 74004 00:14:13.262 08:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74004' 00:14:13.262 08:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74004 00:14:13.262 [2024-11-20 08:47:44.019306] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:13.262 08:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74004 00:14:13.521 [2024-11-20 08:47:44.363524] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:14.895 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:14.895 00:14:14.895 real 0m12.858s 00:14:14.895 user 0m21.471s 00:14:14.895 sys 0m1.680s 00:14:14.895 08:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:14:14.895 08:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.895 ************************************ 00:14:14.895 END TEST raid_state_function_test_sb 00:14:14.895 ************************************ 00:14:14.895 08:47:45 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:14:14.895 08:47:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:14.895 08:47:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:14.895 08:47:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:14.895 ************************************ 00:14:14.895 START TEST raid_superblock_test 00:14:14.895 ************************************ 00:14:14.895 08:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:14:14.895 08:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:14.895 08:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:14.895 08:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:14.895 08:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:14.895 08:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:14.895 08:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:14.895 08:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:14.896 08:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:14.896 08:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:14.896 08:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:14.896 08:47:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:14.896 08:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:14.896 08:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:14.896 08:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:14.896 08:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:14.896 08:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74687 00:14:14.896 08:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:14.896 08:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74687 00:14:14.896 08:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74687 ']' 00:14:14.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.896 08:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.896 08:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.896 08:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.896 08:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.896 08:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.896 [2024-11-20 08:47:45.552517] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:14:14.896 [2024-11-20 08:47:45.552756] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74687 ] 00:14:14.896 [2024-11-20 08:47:45.750117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.153 [2024-11-20 08:47:45.929868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.410 [2024-11-20 08:47:46.142999] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:15.410 [2024-11-20 08:47:46.143072] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:15.977 
08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.977 malloc1 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.977 [2024-11-20 08:47:46.689571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:15.977 [2024-11-20 08:47:46.689679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.977 [2024-11-20 08:47:46.689712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:15.977 [2024-11-20 08:47:46.689729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.977 [2024-11-20 08:47:46.692542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.977 [2024-11-20 08:47:46.692590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:15.977 pt1 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.977 malloc2 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.977 [2024-11-20 08:47:46.740588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:15.977 [2024-11-20 08:47:46.740671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.977 [2024-11-20 08:47:46.740709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:15.977 [2024-11-20 08:47:46.740724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.977 [2024-11-20 08:47:46.743476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.977 [2024-11-20 08:47:46.743521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:15.977 
pt2 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.977 malloc3 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:15.977 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.978 [2024-11-20 08:47:46.806924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:15.978 [2024-11-20 08:47:46.806989] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.978 [2024-11-20 08:47:46.807023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:15.978 [2024-11-20 08:47:46.807038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.978 [2024-11-20 08:47:46.809820] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.978 [2024-11-20 08:47:46.809991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:15.978 pt3 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.978 malloc4 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.978 [2024-11-20 08:47:46.855305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:15.978 [2024-11-20 08:47:46.855394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.978 [2024-11-20 08:47:46.855424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:15.978 [2024-11-20 08:47:46.855439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.978 [2024-11-20 08:47:46.858273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.978 [2024-11-20 08:47:46.858318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:15.978 pt4 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.978 [2024-11-20 08:47:46.863313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:15.978 [2024-11-20 08:47:46.865819] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:15.978 [2024-11-20 08:47:46.865915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:15.978 [2024-11-20 08:47:46.865988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:15.978 [2024-11-20 08:47:46.866250] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:15.978 [2024-11-20 08:47:46.866275] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:15.978 [2024-11-20 08:47:46.866648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:15.978 [2024-11-20 08:47:46.866874] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:15.978 [2024-11-20 08:47:46.866899] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:15.978 [2024-11-20 08:47:46.867090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.978 
08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.978 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.237 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.237 "name": "raid_bdev1", 00:14:16.237 "uuid": "474bb4b6-f313-43c3-8475-2861cb743ccf", 00:14:16.237 "strip_size_kb": 0, 00:14:16.237 "state": "online", 00:14:16.237 "raid_level": "raid1", 00:14:16.237 "superblock": true, 00:14:16.237 "num_base_bdevs": 4, 00:14:16.237 "num_base_bdevs_discovered": 4, 00:14:16.237 "num_base_bdevs_operational": 4, 00:14:16.237 "base_bdevs_list": [ 00:14:16.237 { 00:14:16.237 "name": "pt1", 00:14:16.237 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:16.237 "is_configured": true, 00:14:16.237 "data_offset": 2048, 00:14:16.237 "data_size": 63488 00:14:16.237 }, 00:14:16.237 { 00:14:16.237 "name": "pt2", 00:14:16.237 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:16.237 "is_configured": true, 00:14:16.237 "data_offset": 2048, 00:14:16.237 "data_size": 63488 00:14:16.237 }, 00:14:16.237 { 00:14:16.237 "name": "pt3", 00:14:16.237 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:16.237 "is_configured": true, 00:14:16.237 "data_offset": 2048, 00:14:16.237 "data_size": 63488 
00:14:16.237 }, 00:14:16.237 { 00:14:16.237 "name": "pt4", 00:14:16.237 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:16.237 "is_configured": true, 00:14:16.237 "data_offset": 2048, 00:14:16.237 "data_size": 63488 00:14:16.237 } 00:14:16.237 ] 00:14:16.237 }' 00:14:16.237 08:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.237 08:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.495 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:16.495 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:16.495 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:16.495 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:16.495 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:16.495 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:16.495 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:16.495 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:16.495 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.495 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.495 [2024-11-20 08:47:47.391861] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:16.754 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.754 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:16.754 "name": "raid_bdev1", 00:14:16.754 "aliases": [ 00:14:16.754 "474bb4b6-f313-43c3-8475-2861cb743ccf" 00:14:16.754 ], 
00:14:16.754 "product_name": "Raid Volume", 00:14:16.754 "block_size": 512, 00:14:16.754 "num_blocks": 63488, 00:14:16.754 "uuid": "474bb4b6-f313-43c3-8475-2861cb743ccf", 00:14:16.754 "assigned_rate_limits": { 00:14:16.754 "rw_ios_per_sec": 0, 00:14:16.754 "rw_mbytes_per_sec": 0, 00:14:16.754 "r_mbytes_per_sec": 0, 00:14:16.754 "w_mbytes_per_sec": 0 00:14:16.754 }, 00:14:16.754 "claimed": false, 00:14:16.754 "zoned": false, 00:14:16.754 "supported_io_types": { 00:14:16.754 "read": true, 00:14:16.754 "write": true, 00:14:16.754 "unmap": false, 00:14:16.754 "flush": false, 00:14:16.754 "reset": true, 00:14:16.754 "nvme_admin": false, 00:14:16.754 "nvme_io": false, 00:14:16.754 "nvme_io_md": false, 00:14:16.754 "write_zeroes": true, 00:14:16.754 "zcopy": false, 00:14:16.754 "get_zone_info": false, 00:14:16.754 "zone_management": false, 00:14:16.754 "zone_append": false, 00:14:16.754 "compare": false, 00:14:16.754 "compare_and_write": false, 00:14:16.754 "abort": false, 00:14:16.754 "seek_hole": false, 00:14:16.754 "seek_data": false, 00:14:16.755 "copy": false, 00:14:16.755 "nvme_iov_md": false 00:14:16.755 }, 00:14:16.755 "memory_domains": [ 00:14:16.755 { 00:14:16.755 "dma_device_id": "system", 00:14:16.755 "dma_device_type": 1 00:14:16.755 }, 00:14:16.755 { 00:14:16.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.755 "dma_device_type": 2 00:14:16.755 }, 00:14:16.755 { 00:14:16.755 "dma_device_id": "system", 00:14:16.755 "dma_device_type": 1 00:14:16.755 }, 00:14:16.755 { 00:14:16.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.755 "dma_device_type": 2 00:14:16.755 }, 00:14:16.755 { 00:14:16.755 "dma_device_id": "system", 00:14:16.755 "dma_device_type": 1 00:14:16.755 }, 00:14:16.755 { 00:14:16.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.755 "dma_device_type": 2 00:14:16.755 }, 00:14:16.755 { 00:14:16.755 "dma_device_id": "system", 00:14:16.755 "dma_device_type": 1 00:14:16.755 }, 00:14:16.755 { 00:14:16.755 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:16.755 "dma_device_type": 2 00:14:16.755 } 00:14:16.755 ], 00:14:16.755 "driver_specific": { 00:14:16.755 "raid": { 00:14:16.755 "uuid": "474bb4b6-f313-43c3-8475-2861cb743ccf", 00:14:16.755 "strip_size_kb": 0, 00:14:16.755 "state": "online", 00:14:16.755 "raid_level": "raid1", 00:14:16.755 "superblock": true, 00:14:16.755 "num_base_bdevs": 4, 00:14:16.755 "num_base_bdevs_discovered": 4, 00:14:16.755 "num_base_bdevs_operational": 4, 00:14:16.755 "base_bdevs_list": [ 00:14:16.755 { 00:14:16.755 "name": "pt1", 00:14:16.755 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:16.755 "is_configured": true, 00:14:16.755 "data_offset": 2048, 00:14:16.755 "data_size": 63488 00:14:16.755 }, 00:14:16.755 { 00:14:16.755 "name": "pt2", 00:14:16.755 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:16.755 "is_configured": true, 00:14:16.755 "data_offset": 2048, 00:14:16.755 "data_size": 63488 00:14:16.755 }, 00:14:16.755 { 00:14:16.755 "name": "pt3", 00:14:16.755 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:16.755 "is_configured": true, 00:14:16.755 "data_offset": 2048, 00:14:16.755 "data_size": 63488 00:14:16.755 }, 00:14:16.755 { 00:14:16.755 "name": "pt4", 00:14:16.755 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:16.755 "is_configured": true, 00:14:16.755 "data_offset": 2048, 00:14:16.755 "data_size": 63488 00:14:16.755 } 00:14:16.755 ] 00:14:16.755 } 00:14:16.755 } 00:14:16.755 }' 00:14:16.755 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:16.755 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:16.755 pt2 00:14:16.755 pt3 00:14:16.755 pt4' 00:14:16.755 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.755 08:47:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:16.755 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.755 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:16.755 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.755 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.755 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.755 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.755 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.755 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.755 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.755 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:16.755 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.755 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.755 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.755 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.755 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.755 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.755 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.755 08:47:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.755 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:16.755 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.755 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:17.014 [2024-11-20 08:47:47.767876] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=474bb4b6-f313-43c3-8475-2861cb743ccf 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 474bb4b6-f313-43c3-8475-2861cb743ccf ']' 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.014 [2024-11-20 08:47:47.819519] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:17.014 [2024-11-20 08:47:47.819561] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:17.014 [2024-11-20 08:47:47.819661] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:17.014 [2024-11-20 08:47:47.819771] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:17.014 [2024-11-20 08:47:47.819793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:17.014 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.274 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.274 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:17.274 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:17.274 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:17.274 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:17.274 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:17.274 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:17.274 08:47:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:17.274 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:17.274 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:17.274 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.274 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.274 [2024-11-20 08:47:47.971601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:17.274 [2024-11-20 08:47:47.974036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:17.274 [2024-11-20 08:47:47.974102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:17.274 [2024-11-20 08:47:47.974319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:17.274 [2024-11-20 08:47:47.974454] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:17.274 [2024-11-20 08:47:47.974761] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:17.274 [2024-11-20 08:47:47.974928] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:17.274 [2024-11-20 08:47:47.975204] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:17.274 [2024-11-20 08:47:47.975352] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:17.274 [2024-11-20 08:47:47.975474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:14:17.274 request: 00:14:17.274 { 00:14:17.274 "name": "raid_bdev1", 00:14:17.274 "raid_level": "raid1", 00:14:17.274 "base_bdevs": [ 00:14:17.274 "malloc1", 00:14:17.274 "malloc2", 00:14:17.274 "malloc3", 00:14:17.274 "malloc4" 00:14:17.274 ], 00:14:17.274 "superblock": false, 00:14:17.274 "method": "bdev_raid_create", 00:14:17.274 "req_id": 1 00:14:17.274 } 00:14:17.274 Got JSON-RPC error response 00:14:17.274 response: 00:14:17.274 { 00:14:17.274 "code": -17, 00:14:17.274 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:17.274 } 00:14:17.274 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:17.274 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:17.274 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:17.275 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:17.275 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:17.275 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.275 08:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:17.275 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.275 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.275 08:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.275 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:17.275 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:17.275 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:17.275 08:47:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.275 08:47:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.275 [2024-11-20 08:47:48.035798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:17.275 [2024-11-20 08:47:48.035862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.275 [2024-11-20 08:47:48.035887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:17.275 [2024-11-20 08:47:48.035904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.275 [2024-11-20 08:47:48.038692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.275 [2024-11-20 08:47:48.038744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:17.275 [2024-11-20 08:47:48.038852] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:17.275 [2024-11-20 08:47:48.038933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:17.275 pt1 00:14:17.275 08:47:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.275 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:14:17.275 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.275 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.275 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.275 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.275 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:17.275 08:47:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.275 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.275 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.275 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.275 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.275 08:47:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.275 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.275 08:47:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.275 08:47:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.275 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.275 "name": "raid_bdev1", 00:14:17.275 "uuid": "474bb4b6-f313-43c3-8475-2861cb743ccf", 00:14:17.275 "strip_size_kb": 0, 00:14:17.275 "state": "configuring", 00:14:17.275 "raid_level": "raid1", 00:14:17.275 "superblock": true, 00:14:17.275 "num_base_bdevs": 4, 00:14:17.275 "num_base_bdevs_discovered": 1, 00:14:17.275 "num_base_bdevs_operational": 4, 00:14:17.275 "base_bdevs_list": [ 00:14:17.275 { 00:14:17.275 "name": "pt1", 00:14:17.275 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:17.275 "is_configured": true, 00:14:17.275 "data_offset": 2048, 00:14:17.275 "data_size": 63488 00:14:17.275 }, 00:14:17.275 { 00:14:17.275 "name": null, 00:14:17.275 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:17.275 "is_configured": false, 00:14:17.275 "data_offset": 2048, 00:14:17.275 "data_size": 63488 00:14:17.275 }, 00:14:17.275 { 00:14:17.275 "name": null, 00:14:17.275 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:17.275 
"is_configured": false, 00:14:17.275 "data_offset": 2048, 00:14:17.275 "data_size": 63488 00:14:17.275 }, 00:14:17.275 { 00:14:17.275 "name": null, 00:14:17.275 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:17.275 "is_configured": false, 00:14:17.275 "data_offset": 2048, 00:14:17.275 "data_size": 63488 00:14:17.275 } 00:14:17.275 ] 00:14:17.275 }' 00:14:17.275 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.275 08:47:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.843 [2024-11-20 08:47:48.572021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:17.843 [2024-11-20 08:47:48.572104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.843 [2024-11-20 08:47:48.572133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:17.843 [2024-11-20 08:47:48.572188] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.843 [2024-11-20 08:47:48.572789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.843 [2024-11-20 08:47:48.572836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:17.843 [2024-11-20 08:47:48.572939] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:17.843 [2024-11-20 08:47:48.572988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:14:17.843 pt2 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.843 [2024-11-20 08:47:48.579983] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.843 08:47:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.843 "name": "raid_bdev1", 00:14:17.843 "uuid": "474bb4b6-f313-43c3-8475-2861cb743ccf", 00:14:17.843 "strip_size_kb": 0, 00:14:17.843 "state": "configuring", 00:14:17.843 "raid_level": "raid1", 00:14:17.843 "superblock": true, 00:14:17.843 "num_base_bdevs": 4, 00:14:17.843 "num_base_bdevs_discovered": 1, 00:14:17.843 "num_base_bdevs_operational": 4, 00:14:17.843 "base_bdevs_list": [ 00:14:17.843 { 00:14:17.843 "name": "pt1", 00:14:17.843 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:17.843 "is_configured": true, 00:14:17.843 "data_offset": 2048, 00:14:17.843 "data_size": 63488 00:14:17.843 }, 00:14:17.843 { 00:14:17.843 "name": null, 00:14:17.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:17.843 "is_configured": false, 00:14:17.843 "data_offset": 0, 00:14:17.843 "data_size": 63488 00:14:17.843 }, 00:14:17.843 { 00:14:17.843 "name": null, 00:14:17.843 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:17.843 "is_configured": false, 00:14:17.843 "data_offset": 2048, 00:14:17.843 "data_size": 63488 00:14:17.843 }, 00:14:17.843 { 00:14:17.843 "name": null, 00:14:17.843 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:17.843 "is_configured": false, 00:14:17.843 "data_offset": 2048, 00:14:17.843 "data_size": 63488 00:14:17.843 } 00:14:17.843 ] 00:14:17.843 }' 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.843 08:47:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.411 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:14:18.411 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:18.411 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:18.411 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.411 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.411 [2024-11-20 08:47:49.112137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:18.411 [2024-11-20 08:47:49.112229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.411 [2024-11-20 08:47:49.112268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:18.411 [2024-11-20 08:47:49.112286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.411 [2024-11-20 08:47:49.112859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.411 [2024-11-20 08:47:49.112907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:18.411 [2024-11-20 08:47:49.113014] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:18.411 [2024-11-20 08:47:49.113051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:18.411 pt2 00:14:18.411 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.411 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:18.411 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:18.411 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:18.411 08:47:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.411 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.411 [2024-11-20 08:47:49.120104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:18.411 [2024-11-20 08:47:49.120187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.411 [2024-11-20 08:47:49.120215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:18.411 [2024-11-20 08:47:49.120228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.411 [2024-11-20 08:47:49.120707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.411 [2024-11-20 08:47:49.120747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:18.411 [2024-11-20 08:47:49.120839] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:18.411 [2024-11-20 08:47:49.120867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:18.411 pt3 00:14:18.411 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.411 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:18.411 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:18.411 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:18.411 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.411 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.411 [2024-11-20 08:47:49.128065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:18.411 [2024-11-20 
08:47:49.128123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.411 [2024-11-20 08:47:49.128165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:18.411 [2024-11-20 08:47:49.128182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.411 [2024-11-20 08:47:49.128625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.412 [2024-11-20 08:47:49.128677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:18.412 [2024-11-20 08:47:49.128756] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:18.412 [2024-11-20 08:47:49.128784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:18.412 [2024-11-20 08:47:49.128961] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:18.412 [2024-11-20 08:47:49.128984] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:18.412 [2024-11-20 08:47:49.129307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:18.412 [2024-11-20 08:47:49.129510] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:18.412 [2024-11-20 08:47:49.129537] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:18.412 [2024-11-20 08:47:49.129703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.412 pt4 00:14:18.412 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.412 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:18.412 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:18.412 08:47:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:18.412 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.412 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.412 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.412 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.412 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.412 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.412 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.412 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.412 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.412 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.412 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.412 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.412 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.412 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.412 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.412 "name": "raid_bdev1", 00:14:18.412 "uuid": "474bb4b6-f313-43c3-8475-2861cb743ccf", 00:14:18.412 "strip_size_kb": 0, 00:14:18.412 "state": "online", 00:14:18.412 "raid_level": "raid1", 00:14:18.412 "superblock": true, 00:14:18.412 "num_base_bdevs": 4, 00:14:18.412 
"num_base_bdevs_discovered": 4, 00:14:18.412 "num_base_bdevs_operational": 4, 00:14:18.412 "base_bdevs_list": [ 00:14:18.412 { 00:14:18.412 "name": "pt1", 00:14:18.412 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:18.412 "is_configured": true, 00:14:18.412 "data_offset": 2048, 00:14:18.412 "data_size": 63488 00:14:18.412 }, 00:14:18.412 { 00:14:18.412 "name": "pt2", 00:14:18.412 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:18.412 "is_configured": true, 00:14:18.412 "data_offset": 2048, 00:14:18.412 "data_size": 63488 00:14:18.412 }, 00:14:18.412 { 00:14:18.412 "name": "pt3", 00:14:18.412 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:18.412 "is_configured": true, 00:14:18.412 "data_offset": 2048, 00:14:18.412 "data_size": 63488 00:14:18.412 }, 00:14:18.412 { 00:14:18.412 "name": "pt4", 00:14:18.412 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:18.412 "is_configured": true, 00:14:18.412 "data_offset": 2048, 00:14:18.412 "data_size": 63488 00:14:18.412 } 00:14:18.412 ] 00:14:18.412 }' 00:14:18.412 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.412 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.978 [2024-11-20 08:47:49.672696] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:18.978 "name": "raid_bdev1", 00:14:18.978 "aliases": [ 00:14:18.978 "474bb4b6-f313-43c3-8475-2861cb743ccf" 00:14:18.978 ], 00:14:18.978 "product_name": "Raid Volume", 00:14:18.978 "block_size": 512, 00:14:18.978 "num_blocks": 63488, 00:14:18.978 "uuid": "474bb4b6-f313-43c3-8475-2861cb743ccf", 00:14:18.978 "assigned_rate_limits": { 00:14:18.978 "rw_ios_per_sec": 0, 00:14:18.978 "rw_mbytes_per_sec": 0, 00:14:18.978 "r_mbytes_per_sec": 0, 00:14:18.978 "w_mbytes_per_sec": 0 00:14:18.978 }, 00:14:18.978 "claimed": false, 00:14:18.978 "zoned": false, 00:14:18.978 "supported_io_types": { 00:14:18.978 "read": true, 00:14:18.978 "write": true, 00:14:18.978 "unmap": false, 00:14:18.978 "flush": false, 00:14:18.978 "reset": true, 00:14:18.978 "nvme_admin": false, 00:14:18.978 "nvme_io": false, 00:14:18.978 "nvme_io_md": false, 00:14:18.978 "write_zeroes": true, 00:14:18.978 "zcopy": false, 00:14:18.978 "get_zone_info": false, 00:14:18.978 "zone_management": false, 00:14:18.978 "zone_append": false, 00:14:18.978 "compare": false, 00:14:18.978 "compare_and_write": false, 00:14:18.978 "abort": false, 00:14:18.978 "seek_hole": false, 00:14:18.978 "seek_data": false, 00:14:18.978 "copy": false, 00:14:18.978 "nvme_iov_md": false 00:14:18.978 }, 00:14:18.978 "memory_domains": [ 00:14:18.978 { 00:14:18.978 "dma_device_id": "system", 00:14:18.978 
"dma_device_type": 1 00:14:18.978 }, 00:14:18.978 { 00:14:18.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.978 "dma_device_type": 2 00:14:18.978 }, 00:14:18.978 { 00:14:18.978 "dma_device_id": "system", 00:14:18.978 "dma_device_type": 1 00:14:18.978 }, 00:14:18.978 { 00:14:18.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.978 "dma_device_type": 2 00:14:18.978 }, 00:14:18.978 { 00:14:18.978 "dma_device_id": "system", 00:14:18.978 "dma_device_type": 1 00:14:18.978 }, 00:14:18.978 { 00:14:18.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.978 "dma_device_type": 2 00:14:18.978 }, 00:14:18.978 { 00:14:18.978 "dma_device_id": "system", 00:14:18.978 "dma_device_type": 1 00:14:18.978 }, 00:14:18.978 { 00:14:18.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.978 "dma_device_type": 2 00:14:18.978 } 00:14:18.978 ], 00:14:18.978 "driver_specific": { 00:14:18.978 "raid": { 00:14:18.978 "uuid": "474bb4b6-f313-43c3-8475-2861cb743ccf", 00:14:18.978 "strip_size_kb": 0, 00:14:18.978 "state": "online", 00:14:18.978 "raid_level": "raid1", 00:14:18.978 "superblock": true, 00:14:18.978 "num_base_bdevs": 4, 00:14:18.978 "num_base_bdevs_discovered": 4, 00:14:18.978 "num_base_bdevs_operational": 4, 00:14:18.978 "base_bdevs_list": [ 00:14:18.978 { 00:14:18.978 "name": "pt1", 00:14:18.978 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:18.978 "is_configured": true, 00:14:18.978 "data_offset": 2048, 00:14:18.978 "data_size": 63488 00:14:18.978 }, 00:14:18.978 { 00:14:18.978 "name": "pt2", 00:14:18.978 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:18.978 "is_configured": true, 00:14:18.978 "data_offset": 2048, 00:14:18.978 "data_size": 63488 00:14:18.978 }, 00:14:18.978 { 00:14:18.978 "name": "pt3", 00:14:18.978 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:18.978 "is_configured": true, 00:14:18.978 "data_offset": 2048, 00:14:18.978 "data_size": 63488 00:14:18.978 }, 00:14:18.978 { 00:14:18.978 "name": "pt4", 00:14:18.978 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:14:18.978 "is_configured": true, 00:14:18.978 "data_offset": 2048, 00:14:18.978 "data_size": 63488 00:14:18.978 } 00:14:18.978 ] 00:14:18.978 } 00:14:18.978 } 00:14:18.978 }' 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:18.978 pt2 00:14:18.978 pt3 00:14:18.978 pt4' 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.978 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.237 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.237 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:19.237 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:19.237 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:19.237 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:19.237 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.237 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.237 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.237 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.237 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:19.237 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:19.237 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:19.237 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.237 08:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:19.237 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:19.237 08:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.237 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.237 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:19.237 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:19.237 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:19.237 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:19.237 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.237 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.237 [2024-11-20 08:47:50.052719] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:19.237 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.237 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 474bb4b6-f313-43c3-8475-2861cb743ccf '!=' 474bb4b6-f313-43c3-8475-2861cb743ccf ']' 00:14:19.237 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:14:19.237 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:19.237 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:19.237 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:19.237 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.237 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.238 [2024-11-20 08:47:50.104412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:19.238 08:47:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.238 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:19.238 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.238 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.238 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.238 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.238 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.238 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.238 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.238 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.238 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.238 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.238 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.238 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.238 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.238 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.497 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.497 "name": "raid_bdev1", 00:14:19.497 "uuid": "474bb4b6-f313-43c3-8475-2861cb743ccf", 00:14:19.497 "strip_size_kb": 0, 00:14:19.497 "state": "online", 
00:14:19.497 "raid_level": "raid1", 00:14:19.497 "superblock": true, 00:14:19.497 "num_base_bdevs": 4, 00:14:19.497 "num_base_bdevs_discovered": 3, 00:14:19.497 "num_base_bdevs_operational": 3, 00:14:19.497 "base_bdevs_list": [ 00:14:19.497 { 00:14:19.497 "name": null, 00:14:19.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.497 "is_configured": false, 00:14:19.497 "data_offset": 0, 00:14:19.497 "data_size": 63488 00:14:19.497 }, 00:14:19.497 { 00:14:19.497 "name": "pt2", 00:14:19.497 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:19.497 "is_configured": true, 00:14:19.497 "data_offset": 2048, 00:14:19.497 "data_size": 63488 00:14:19.497 }, 00:14:19.497 { 00:14:19.497 "name": "pt3", 00:14:19.497 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:19.497 "is_configured": true, 00:14:19.497 "data_offset": 2048, 00:14:19.497 "data_size": 63488 00:14:19.497 }, 00:14:19.497 { 00:14:19.497 "name": "pt4", 00:14:19.497 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:19.497 "is_configured": true, 00:14:19.497 "data_offset": 2048, 00:14:19.497 "data_size": 63488 00:14:19.497 } 00:14:19.497 ] 00:14:19.497 }' 00:14:19.497 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.497 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.756 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:19.756 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.756 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.756 [2024-11-20 08:47:50.612501] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:19.756 [2024-11-20 08:47:50.612669] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:19.756 [2024-11-20 08:47:50.612781] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:14:19.756 [2024-11-20 08:47:50.612884] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:19.756 [2024-11-20 08:47:50.612901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:19.756 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.756 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:19.756 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.756 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.756 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.756 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.756 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:19.756 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:19.756 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:19.756 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:19.756 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:19.756 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.756 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.756 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.756 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:19.756 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:19.756 
08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:19.756 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.757 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.016 [2024-11-20 08:47:50.688520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:20.016 [2024-11-20 08:47:50.688732] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.016 [2024-11-20 08:47:50.688780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:20.016 [2024-11-20 08:47:50.688797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.016 [2024-11-20 08:47:50.691646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.016 [2024-11-20 08:47:50.691690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:20.016 [2024-11-20 08:47:50.691820] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:20.016 [2024-11-20 08:47:50.691874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:20.016 pt2 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.016 "name": "raid_bdev1", 00:14:20.016 "uuid": "474bb4b6-f313-43c3-8475-2861cb743ccf", 00:14:20.016 "strip_size_kb": 0, 00:14:20.016 "state": "configuring", 00:14:20.016 "raid_level": "raid1", 00:14:20.016 "superblock": true, 00:14:20.016 "num_base_bdevs": 4, 00:14:20.016 "num_base_bdevs_discovered": 1, 00:14:20.016 "num_base_bdevs_operational": 3, 00:14:20.016 "base_bdevs_list": [ 00:14:20.016 { 00:14:20.016 "name": null, 00:14:20.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.016 "is_configured": false, 00:14:20.016 "data_offset": 2048, 00:14:20.016 "data_size": 63488 00:14:20.016 }, 00:14:20.016 { 00:14:20.016 "name": "pt2", 00:14:20.016 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:20.016 "is_configured": true, 00:14:20.016 "data_offset": 2048, 00:14:20.016 "data_size": 63488 00:14:20.016 }, 00:14:20.016 { 00:14:20.016 "name": null, 00:14:20.016 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:20.016 "is_configured": false, 00:14:20.016 "data_offset": 2048, 00:14:20.016 "data_size": 63488 00:14:20.016 }, 00:14:20.016 { 00:14:20.016 "name": null, 00:14:20.016 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:20.016 "is_configured": false, 00:14:20.016 "data_offset": 2048, 00:14:20.016 "data_size": 63488 00:14:20.016 } 00:14:20.016 ] 00:14:20.016 }' 
00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.016 08:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.584 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:20.584 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:20.584 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:20.584 08:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.584 08:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.584 [2024-11-20 08:47:51.204662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:20.585 [2024-11-20 08:47:51.204915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.585 [2024-11-20 08:47:51.204959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:20.585 [2024-11-20 08:47:51.204976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.585 [2024-11-20 08:47:51.205579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.585 [2024-11-20 08:47:51.205604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:20.585 [2024-11-20 08:47:51.205710] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:20.585 [2024-11-20 08:47:51.205742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:20.585 pt3 00:14:20.585 08:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.585 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:14:20.585 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.585 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.585 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.585 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.585 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.585 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.585 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.585 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.585 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.585 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.585 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.585 08:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.585 08:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.585 08:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.585 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.585 "name": "raid_bdev1", 00:14:20.585 "uuid": "474bb4b6-f313-43c3-8475-2861cb743ccf", 00:14:20.585 "strip_size_kb": 0, 00:14:20.585 "state": "configuring", 00:14:20.585 "raid_level": "raid1", 00:14:20.585 "superblock": true, 00:14:20.585 "num_base_bdevs": 4, 00:14:20.585 "num_base_bdevs_discovered": 2, 00:14:20.585 "num_base_bdevs_operational": 3, 00:14:20.585 
"base_bdevs_list": [ 00:14:20.585 { 00:14:20.585 "name": null, 00:14:20.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.585 "is_configured": false, 00:14:20.585 "data_offset": 2048, 00:14:20.585 "data_size": 63488 00:14:20.585 }, 00:14:20.585 { 00:14:20.585 "name": "pt2", 00:14:20.585 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:20.585 "is_configured": true, 00:14:20.585 "data_offset": 2048, 00:14:20.585 "data_size": 63488 00:14:20.585 }, 00:14:20.585 { 00:14:20.585 "name": "pt3", 00:14:20.585 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:20.585 "is_configured": true, 00:14:20.585 "data_offset": 2048, 00:14:20.585 "data_size": 63488 00:14:20.585 }, 00:14:20.585 { 00:14:20.585 "name": null, 00:14:20.585 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:20.585 "is_configured": false, 00:14:20.585 "data_offset": 2048, 00:14:20.585 "data_size": 63488 00:14:20.585 } 00:14:20.585 ] 00:14:20.585 }' 00:14:20.585 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.585 08:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.844 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:20.844 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:20.844 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:14:20.844 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:20.844 08:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.844 08:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.844 [2024-11-20 08:47:51.724812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:20.844 [2024-11-20 08:47:51.725026] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.844 [2024-11-20 08:47:51.725071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:20.844 [2024-11-20 08:47:51.725088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.844 [2024-11-20 08:47:51.725669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.844 [2024-11-20 08:47:51.725695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:20.844 [2024-11-20 08:47:51.725801] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:20.844 [2024-11-20 08:47:51.725840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:20.844 [2024-11-20 08:47:51.726021] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:20.844 [2024-11-20 08:47:51.726037] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:20.844 [2024-11-20 08:47:51.726364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:20.844 [2024-11-20 08:47:51.726560] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:20.844 [2024-11-20 08:47:51.726585] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:20.844 [2024-11-20 08:47:51.726750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.844 pt4 00:14:20.844 08:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.844 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:20.844 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.844 08:47:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.844 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.844 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.844 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.844 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.844 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.844 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.845 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.845 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.845 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.845 08:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.845 08:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.845 08:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.103 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.103 "name": "raid_bdev1", 00:14:21.103 "uuid": "474bb4b6-f313-43c3-8475-2861cb743ccf", 00:14:21.103 "strip_size_kb": 0, 00:14:21.103 "state": "online", 00:14:21.103 "raid_level": "raid1", 00:14:21.103 "superblock": true, 00:14:21.103 "num_base_bdevs": 4, 00:14:21.103 "num_base_bdevs_discovered": 3, 00:14:21.104 "num_base_bdevs_operational": 3, 00:14:21.104 "base_bdevs_list": [ 00:14:21.104 { 00:14:21.104 "name": null, 00:14:21.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.104 "is_configured": false, 00:14:21.104 
"data_offset": 2048, 00:14:21.104 "data_size": 63488 00:14:21.104 }, 00:14:21.104 { 00:14:21.104 "name": "pt2", 00:14:21.104 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:21.104 "is_configured": true, 00:14:21.104 "data_offset": 2048, 00:14:21.104 "data_size": 63488 00:14:21.104 }, 00:14:21.104 { 00:14:21.104 "name": "pt3", 00:14:21.104 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:21.104 "is_configured": true, 00:14:21.104 "data_offset": 2048, 00:14:21.104 "data_size": 63488 00:14:21.104 }, 00:14:21.104 { 00:14:21.104 "name": "pt4", 00:14:21.104 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:21.104 "is_configured": true, 00:14:21.104 "data_offset": 2048, 00:14:21.104 "data_size": 63488 00:14:21.104 } 00:14:21.104 ] 00:14:21.104 }' 00:14:21.104 08:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.104 08:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.363 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:21.363 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.363 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.363 [2024-11-20 08:47:52.260892] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:21.363 [2024-11-20 08:47:52.260926] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:21.363 [2024-11-20 08:47:52.261019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:21.363 [2024-11-20 08:47:52.261118] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:21.363 [2024-11-20 08:47:52.261139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:21.363 08:47:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.363 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.363 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:21.363 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.363 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.621 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.622 [2024-11-20 08:47:52.328911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:21.622 [2024-11-20 08:47:52.329002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:21.622 [2024-11-20 08:47:52.329029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:14:21.622 [2024-11-20 08:47:52.329049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.622 [2024-11-20 08:47:52.331933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.622 [2024-11-20 08:47:52.332126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:21.622 [2024-11-20 08:47:52.332263] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:21.622 [2024-11-20 08:47:52.332329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:21.622 [2024-11-20 08:47:52.332491] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:21.622 [2024-11-20 08:47:52.332514] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:21.622 [2024-11-20 08:47:52.332536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:21.622 [2024-11-20 08:47:52.332617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:21.622 [2024-11-20 08:47:52.332763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:21.622 pt1 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.622 "name": "raid_bdev1", 00:14:21.622 "uuid": "474bb4b6-f313-43c3-8475-2861cb743ccf", 00:14:21.622 "strip_size_kb": 0, 00:14:21.622 "state": "configuring", 00:14:21.622 "raid_level": "raid1", 00:14:21.622 "superblock": true, 00:14:21.622 "num_base_bdevs": 4, 00:14:21.622 "num_base_bdevs_discovered": 2, 00:14:21.622 "num_base_bdevs_operational": 3, 00:14:21.622 "base_bdevs_list": [ 00:14:21.622 { 00:14:21.622 "name": null, 00:14:21.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.622 "is_configured": false, 00:14:21.622 "data_offset": 2048, 00:14:21.622 
"data_size": 63488 00:14:21.622 }, 00:14:21.622 { 00:14:21.622 "name": "pt2", 00:14:21.622 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:21.622 "is_configured": true, 00:14:21.622 "data_offset": 2048, 00:14:21.622 "data_size": 63488 00:14:21.622 }, 00:14:21.622 { 00:14:21.622 "name": "pt3", 00:14:21.622 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:21.622 "is_configured": true, 00:14:21.622 "data_offset": 2048, 00:14:21.622 "data_size": 63488 00:14:21.622 }, 00:14:21.622 { 00:14:21.622 "name": null, 00:14:21.622 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:21.622 "is_configured": false, 00:14:21.622 "data_offset": 2048, 00:14:21.622 "data_size": 63488 00:14:21.622 } 00:14:21.622 ] 00:14:21.622 }' 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.622 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.189 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:22.189 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:22.189 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.189 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.189 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.189 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:22.189 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:22.189 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.189 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.189 [2024-11-20 
08:47:52.921087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:22.190 [2024-11-20 08:47:52.921328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.190 [2024-11-20 08:47:52.921375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:22.190 [2024-11-20 08:47:52.921391] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.190 [2024-11-20 08:47:52.921947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.190 [2024-11-20 08:47:52.921972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:22.190 [2024-11-20 08:47:52.922077] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:22.190 [2024-11-20 08:47:52.922116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:22.190 [2024-11-20 08:47:52.922314] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:22.190 [2024-11-20 08:47:52.922332] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:22.190 [2024-11-20 08:47:52.922642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:22.190 [2024-11-20 08:47:52.922827] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:22.190 [2024-11-20 08:47:52.922847] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:22.190 [2024-11-20 08:47:52.923019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.190 pt4 00:14:22.190 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.190 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:22.190 08:47:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.190 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.190 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.190 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.190 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.190 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.190 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.190 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.190 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.190 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.190 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.190 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.190 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.190 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.190 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.190 "name": "raid_bdev1", 00:14:22.190 "uuid": "474bb4b6-f313-43c3-8475-2861cb743ccf", 00:14:22.190 "strip_size_kb": 0, 00:14:22.190 "state": "online", 00:14:22.190 "raid_level": "raid1", 00:14:22.190 "superblock": true, 00:14:22.190 "num_base_bdevs": 4, 00:14:22.190 "num_base_bdevs_discovered": 3, 00:14:22.190 "num_base_bdevs_operational": 3, 00:14:22.190 "base_bdevs_list": [ 00:14:22.190 { 
00:14:22.190 "name": null, 00:14:22.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.190 "is_configured": false, 00:14:22.190 "data_offset": 2048, 00:14:22.190 "data_size": 63488 00:14:22.190 }, 00:14:22.190 { 00:14:22.190 "name": "pt2", 00:14:22.190 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:22.190 "is_configured": true, 00:14:22.190 "data_offset": 2048, 00:14:22.190 "data_size": 63488 00:14:22.190 }, 00:14:22.190 { 00:14:22.190 "name": "pt3", 00:14:22.190 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:22.190 "is_configured": true, 00:14:22.190 "data_offset": 2048, 00:14:22.190 "data_size": 63488 00:14:22.190 }, 00:14:22.190 { 00:14:22.190 "name": "pt4", 00:14:22.190 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:22.190 "is_configured": true, 00:14:22.190 "data_offset": 2048, 00:14:22.190 "data_size": 63488 00:14:22.190 } 00:14:22.190 ] 00:14:22.190 }' 00:14:22.190 08:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.190 08:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.758 08:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:22.758 08:47:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.758 08:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:22.758 08:47:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.758 08:47:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.758 08:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:22.758 08:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:22.758 08:47:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.758 
08:47:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.758 08:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:22.758 [2024-11-20 08:47:53.529620] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:22.758 08:47:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.758 08:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 474bb4b6-f313-43c3-8475-2861cb743ccf '!=' 474bb4b6-f313-43c3-8475-2861cb743ccf ']' 00:14:22.758 08:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74687 00:14:22.758 08:47:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74687 ']' 00:14:22.758 08:47:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74687 00:14:22.758 08:47:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:22.758 08:47:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:22.758 08:47:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74687 00:14:22.758 killing process with pid 74687 00:14:22.758 08:47:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:22.758 08:47:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:22.758 08:47:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74687' 00:14:22.758 08:47:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74687 00:14:22.758 [2024-11-20 08:47:53.612541] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:22.758 08:47:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74687 00:14:22.758 [2024-11-20 08:47:53.612645] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.758 [2024-11-20 08:47:53.612753] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:22.758 [2024-11-20 08:47:53.612773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:23.325 [2024-11-20 08:47:53.957666] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:24.312 08:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:24.312 00:14:24.312 real 0m9.527s 00:14:24.312 user 0m15.791s 00:14:24.312 sys 0m1.337s 00:14:24.312 ************************************ 00:14:24.312 08:47:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:24.312 08:47:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.312 END TEST raid_superblock_test 00:14:24.312 ************************************ 00:14:24.312 08:47:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:14:24.312 08:47:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:24.312 08:47:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:24.312 08:47:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:24.312 ************************************ 00:14:24.312 START TEST raid_read_error_test 00:14:24.312 ************************************ 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:24.312 08:47:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zxsSIVYnsN 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75187 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75187 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75187 ']' 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:24.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:24.312 08:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.312 [2024-11-20 08:47:55.163560] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:14:24.312 [2024-11-20 08:47:55.163741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75187 ] 00:14:24.570 [2024-11-20 08:47:55.347220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.570 [2024-11-20 08:47:55.484614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.829 [2024-11-20 08:47:55.695209] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:24.829 [2024-11-20 08:47:55.695263] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.396 BaseBdev1_malloc 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.396 true 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.396 [2024-11-20 08:47:56.185621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:25.396 [2024-11-20 08:47:56.185687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.396 [2024-11-20 08:47:56.185716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:25.396 [2024-11-20 08:47:56.185734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.396 [2024-11-20 08:47:56.188529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.396 [2024-11-20 08:47:56.188715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:25.396 BaseBdev1 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.396 BaseBdev2_malloc 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.396 true 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.396 [2024-11-20 08:47:56.241072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:25.396 [2024-11-20 08:47:56.241309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.396 [2024-11-20 08:47:56.241343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:25.396 [2024-11-20 08:47:56.241362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.396 [2024-11-20 08:47:56.244112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.396 [2024-11-20 08:47:56.244174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:25.396 BaseBdev2 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.396 BaseBdev3_malloc 00:14:25.396 08:47:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.396 true 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.396 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.655 [2024-11-20 08:47:56.310934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:25.655 [2024-11-20 08:47:56.311026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.655 [2024-11-20 08:47:56.311069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:25.655 [2024-11-20 08:47:56.311087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.655 [2024-11-20 08:47:56.313938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.656 [2024-11-20 08:47:56.314124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:25.656 BaseBdev3 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.656 BaseBdev4_malloc 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.656 true 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.656 [2024-11-20 08:47:56.367137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:25.656 [2024-11-20 08:47:56.367214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.656 [2024-11-20 08:47:56.367242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:25.656 [2024-11-20 08:47:56.367261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.656 [2024-11-20 08:47:56.370026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.656 [2024-11-20 08:47:56.370097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:25.656 BaseBdev4 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.656 [2024-11-20 08:47:56.375233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:25.656 [2024-11-20 08:47:56.377634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:25.656 [2024-11-20 08:47:56.377743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:25.656 [2024-11-20 08:47:56.377856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:25.656 [2024-11-20 08:47:56.378179] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:25.656 [2024-11-20 08:47:56.378203] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:25.656 [2024-11-20 08:47:56.378502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:25.656 [2024-11-20 08:47:56.378721] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:25.656 [2024-11-20 08:47:56.378737] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:25.656 [2024-11-20 08:47:56.378935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:25.656 08:47:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.656 "name": "raid_bdev1", 00:14:25.656 "uuid": "5e247761-22f4-4900-bf21-28c4f3eb6553", 00:14:25.656 "strip_size_kb": 0, 00:14:25.656 "state": "online", 00:14:25.656 "raid_level": "raid1", 00:14:25.656 "superblock": true, 00:14:25.656 "num_base_bdevs": 4, 00:14:25.656 "num_base_bdevs_discovered": 4, 00:14:25.656 "num_base_bdevs_operational": 4, 00:14:25.656 "base_bdevs_list": [ 00:14:25.656 { 
00:14:25.656 "name": "BaseBdev1", 00:14:25.656 "uuid": "76c1ba65-3b55-5141-b9b6-9f059f96a1e2", 00:14:25.656 "is_configured": true, 00:14:25.656 "data_offset": 2048, 00:14:25.656 "data_size": 63488 00:14:25.656 }, 00:14:25.656 { 00:14:25.656 "name": "BaseBdev2", 00:14:25.656 "uuid": "cc246be5-2fa0-5d16-9f65-908cf4d3445d", 00:14:25.656 "is_configured": true, 00:14:25.656 "data_offset": 2048, 00:14:25.656 "data_size": 63488 00:14:25.656 }, 00:14:25.656 { 00:14:25.656 "name": "BaseBdev3", 00:14:25.656 "uuid": "16846bbf-3d57-5427-aa44-837cb21fb4c4", 00:14:25.656 "is_configured": true, 00:14:25.656 "data_offset": 2048, 00:14:25.656 "data_size": 63488 00:14:25.656 }, 00:14:25.656 { 00:14:25.656 "name": "BaseBdev4", 00:14:25.656 "uuid": "77e048f3-32eb-5d0b-bfd1-b2e6e2f20e0d", 00:14:25.656 "is_configured": true, 00:14:25.656 "data_offset": 2048, 00:14:25.656 "data_size": 63488 00:14:25.656 } 00:14:25.656 ] 00:14:25.656 }' 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.656 08:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.223 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:26.224 08:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:26.224 [2024-11-20 08:47:57.008856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:27.160 08:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:27.160 08:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.160 08:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.160 08:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.160 08:47:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:27.160 08:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:27.160 08:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:14:27.160 08:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:27.160 08:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:27.160 08:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.160 08:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.160 08:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.160 08:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.160 08:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:27.160 08:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.160 08:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.160 08:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.160 08:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.160 08:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.160 08:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.160 08:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.160 08:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.160 08:47:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.160 08:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.160 "name": "raid_bdev1", 00:14:27.160 "uuid": "5e247761-22f4-4900-bf21-28c4f3eb6553", 00:14:27.160 "strip_size_kb": 0, 00:14:27.160 "state": "online", 00:14:27.160 "raid_level": "raid1", 00:14:27.160 "superblock": true, 00:14:27.160 "num_base_bdevs": 4, 00:14:27.160 "num_base_bdevs_discovered": 4, 00:14:27.160 "num_base_bdevs_operational": 4, 00:14:27.160 "base_bdevs_list": [ 00:14:27.160 { 00:14:27.160 "name": "BaseBdev1", 00:14:27.160 "uuid": "76c1ba65-3b55-5141-b9b6-9f059f96a1e2", 00:14:27.160 "is_configured": true, 00:14:27.160 "data_offset": 2048, 00:14:27.160 "data_size": 63488 00:14:27.160 }, 00:14:27.160 { 00:14:27.160 "name": "BaseBdev2", 00:14:27.160 "uuid": "cc246be5-2fa0-5d16-9f65-908cf4d3445d", 00:14:27.160 "is_configured": true, 00:14:27.160 "data_offset": 2048, 00:14:27.160 "data_size": 63488 00:14:27.160 }, 00:14:27.160 { 00:14:27.160 "name": "BaseBdev3", 00:14:27.160 "uuid": "16846bbf-3d57-5427-aa44-837cb21fb4c4", 00:14:27.160 "is_configured": true, 00:14:27.160 "data_offset": 2048, 00:14:27.160 "data_size": 63488 00:14:27.160 }, 00:14:27.160 { 00:14:27.160 "name": "BaseBdev4", 00:14:27.160 "uuid": "77e048f3-32eb-5d0b-bfd1-b2e6e2f20e0d", 00:14:27.160 "is_configured": true, 00:14:27.160 "data_offset": 2048, 00:14:27.160 "data_size": 63488 00:14:27.160 } 00:14:27.160 ] 00:14:27.160 }' 00:14:27.160 08:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.160 08:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.728 08:47:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:27.728 08:47:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.728 08:47:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:27.728 [2024-11-20 08:47:58.450854] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:27.728 [2024-11-20 08:47:58.451067] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.728 { 00:14:27.728 "results": [ 00:14:27.728 { 00:14:27.728 "job": "raid_bdev1", 00:14:27.728 "core_mask": "0x1", 00:14:27.728 "workload": "randrw", 00:14:27.728 "percentage": 50, 00:14:27.728 "status": "finished", 00:14:27.728 "queue_depth": 1, 00:14:27.728 "io_size": 131072, 00:14:27.728 "runtime": 1.439697, 00:14:27.728 "iops": 7830.119809932229, 00:14:27.728 "mibps": 978.7649762415286, 00:14:27.728 "io_failed": 0, 00:14:27.728 "io_timeout": 0, 00:14:27.728 "avg_latency_us": 123.53373386127755, 00:14:27.728 "min_latency_us": 39.33090909090909, 00:14:27.728 "max_latency_us": 1936.290909090909 00:14:27.728 } 00:14:27.728 ], 00:14:27.728 "core_count": 1 00:14:27.728 } 00:14:27.728 [2024-11-20 08:47:58.454727] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.728 [2024-11-20 08:47:58.454800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.728 [2024-11-20 08:47:58.455024] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.728 [2024-11-20 08:47:58.455050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:27.728 08:47:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.728 08:47:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75187 00:14:27.728 08:47:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75187 ']' 00:14:27.728 08:47:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75187 00:14:27.728 08:47:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:14:27.728 08:47:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.728 08:47:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75187 00:14:27.728 killing process with pid 75187 00:14:27.728 08:47:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:27.728 08:47:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.728 08:47:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75187' 00:14:27.728 08:47:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75187 00:14:27.728 [2024-11-20 08:47:58.490759] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:27.728 08:47:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75187 00:14:27.988 [2024-11-20 08:47:58.780888] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:29.366 08:47:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:29.366 08:47:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zxsSIVYnsN 00:14:29.366 08:47:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:29.366 08:47:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:14:29.366 08:47:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:29.366 08:47:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:29.366 08:47:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:29.366 08:47:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:29.366 00:14:29.366 real 0m4.851s 00:14:29.366 user 0m5.975s 00:14:29.366 sys 0m0.624s 
00:14:29.366 ************************************ 00:14:29.366 END TEST raid_read_error_test 00:14:29.366 ************************************ 00:14:29.366 08:47:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.366 08:47:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.366 08:47:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:14:29.366 08:47:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:29.366 08:47:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.366 08:47:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:29.366 ************************************ 00:14:29.366 START TEST raid_write_error_test 00:14:29.366 ************************************ 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rDiTSBPiRb 00:14:29.366 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75333 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75333 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75333 ']' 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.366 08:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.366 [2024-11-20 08:48:00.068659] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:14:29.366 [2024-11-20 08:48:00.069025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75333 ] 00:14:29.366 [2024-11-20 08:48:00.241221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.626 [2024-11-20 08:48:00.368346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.885 [2024-11-20 08:48:00.573309] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.885 [2024-11-20 08:48:00.573352] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:30.454 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.454 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:30.454 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.455 BaseBdev1_malloc 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.455 true 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.455 [2024-11-20 08:48:01.154173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:30.455 [2024-11-20 08:48:01.154239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.455 [2024-11-20 08:48:01.154268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:30.455 [2024-11-20 08:48:01.154285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.455 [2024-11-20 08:48:01.157037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.455 [2024-11-20 08:48:01.157261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:30.455 BaseBdev1 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.455 BaseBdev2_malloc 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:30.455 08:48:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.455 true 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.455 [2024-11-20 08:48:01.210139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:30.455 [2024-11-20 08:48:01.210217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.455 [2024-11-20 08:48:01.210242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:30.455 [2024-11-20 08:48:01.210259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.455 [2024-11-20 08:48:01.213003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.455 [2024-11-20 08:48:01.213209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:30.455 BaseBdev2 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:30.455 BaseBdev3_malloc 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.455 true 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.455 [2024-11-20 08:48:01.274907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:30.455 [2024-11-20 08:48:01.274985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.455 [2024-11-20 08:48:01.275011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:30.455 [2024-11-20 08:48:01.275028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.455 [2024-11-20 08:48:01.277833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.455 [2024-11-20 08:48:01.278070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:30.455 BaseBdev3 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.455 BaseBdev4_malloc 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.455 true 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.455 [2024-11-20 08:48:01.330878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:30.455 [2024-11-20 08:48:01.331104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.455 [2024-11-20 08:48:01.331140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:30.455 [2024-11-20 08:48:01.331177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.455 [2024-11-20 08:48:01.333942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.455 [2024-11-20 08:48:01.333997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:30.455 BaseBdev4 
00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.455 [2024-11-20 08:48:01.338957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.455 [2024-11-20 08:48:01.341515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:30.455 [2024-11-20 08:48:01.341627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:30.455 [2024-11-20 08:48:01.341729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:30.455 [2024-11-20 08:48:01.342027] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:30.455 [2024-11-20 08:48:01.342059] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:30.455 [2024-11-20 08:48:01.342377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:30.455 [2024-11-20 08:48:01.342605] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:30.455 [2024-11-20 08:48:01.342622] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:30.455 [2024-11-20 08:48:01.342853] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.455 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.713 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.713 "name": "raid_bdev1", 00:14:30.713 "uuid": "00a99588-a300-411f-855b-8d27f5feb7b9", 00:14:30.713 "strip_size_kb": 0, 00:14:30.713 "state": "online", 00:14:30.713 "raid_level": "raid1", 00:14:30.713 "superblock": true, 00:14:30.713 "num_base_bdevs": 4, 00:14:30.713 "num_base_bdevs_discovered": 4, 00:14:30.713 
"num_base_bdevs_operational": 4, 00:14:30.713 "base_bdevs_list": [ 00:14:30.713 { 00:14:30.713 "name": "BaseBdev1", 00:14:30.713 "uuid": "b4ca799b-bc75-571c-ba31-93508ca213f2", 00:14:30.713 "is_configured": true, 00:14:30.713 "data_offset": 2048, 00:14:30.713 "data_size": 63488 00:14:30.713 }, 00:14:30.713 { 00:14:30.713 "name": "BaseBdev2", 00:14:30.713 "uuid": "10fb717a-c937-5667-850b-e422b301f1a3", 00:14:30.713 "is_configured": true, 00:14:30.713 "data_offset": 2048, 00:14:30.713 "data_size": 63488 00:14:30.713 }, 00:14:30.713 { 00:14:30.713 "name": "BaseBdev3", 00:14:30.713 "uuid": "bc9a53de-eda1-5c58-bd4e-386da82f0c62", 00:14:30.713 "is_configured": true, 00:14:30.713 "data_offset": 2048, 00:14:30.713 "data_size": 63488 00:14:30.713 }, 00:14:30.713 { 00:14:30.713 "name": "BaseBdev4", 00:14:30.713 "uuid": "bce94895-1c48-5f62-b41a-20cd52554af2", 00:14:30.713 "is_configured": true, 00:14:30.713 "data_offset": 2048, 00:14:30.713 "data_size": 63488 00:14:30.713 } 00:14:30.713 ] 00:14:30.713 }' 00:14:30.713 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.713 08:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.972 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:30.972 08:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:31.231 [2024-11-20 08:48:01.948632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.168 [2024-11-20 08:48:02.835046] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:14:32.168 [2024-11-20 08:48:02.835124] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:32.168 [2024-11-20 08:48:02.835412] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.168 "name": "raid_bdev1", 00:14:32.168 "uuid": "00a99588-a300-411f-855b-8d27f5feb7b9", 00:14:32.168 "strip_size_kb": 0, 00:14:32.168 "state": "online", 00:14:32.168 "raid_level": "raid1", 00:14:32.168 "superblock": true, 00:14:32.168 "num_base_bdevs": 4, 00:14:32.168 "num_base_bdevs_discovered": 3, 00:14:32.168 "num_base_bdevs_operational": 3, 00:14:32.168 "base_bdevs_list": [ 00:14:32.168 { 00:14:32.168 "name": null, 00:14:32.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.168 "is_configured": false, 00:14:32.168 "data_offset": 0, 00:14:32.168 "data_size": 63488 00:14:32.168 }, 00:14:32.168 { 00:14:32.168 "name": "BaseBdev2", 00:14:32.168 "uuid": "10fb717a-c937-5667-850b-e422b301f1a3", 00:14:32.168 "is_configured": true, 00:14:32.168 "data_offset": 2048, 00:14:32.168 "data_size": 63488 00:14:32.168 }, 00:14:32.168 { 00:14:32.168 "name": "BaseBdev3", 00:14:32.168 "uuid": "bc9a53de-eda1-5c58-bd4e-386da82f0c62", 00:14:32.168 "is_configured": true, 00:14:32.168 "data_offset": 2048, 00:14:32.168 "data_size": 63488 00:14:32.168 }, 00:14:32.168 { 00:14:32.168 "name": "BaseBdev4", 00:14:32.168 "uuid": "bce94895-1c48-5f62-b41a-20cd52554af2", 00:14:32.168 "is_configured": true, 00:14:32.168 "data_offset": 2048, 00:14:32.168 "data_size": 63488 00:14:32.168 } 00:14:32.168 ] 
00:14:32.168 }' 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.168 08:48:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.734 08:48:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:32.734 08:48:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.734 08:48:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.734 [2024-11-20 08:48:03.364561] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:32.734 [2024-11-20 08:48:03.364756] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:32.734 { 00:14:32.734 "results": [ 00:14:32.734 { 00:14:32.734 "job": "raid_bdev1", 00:14:32.734 "core_mask": "0x1", 00:14:32.734 "workload": "randrw", 00:14:32.734 "percentage": 50, 00:14:32.734 "status": "finished", 00:14:32.734 "queue_depth": 1, 00:14:32.734 "io_size": 131072, 00:14:32.734 "runtime": 1.413406, 00:14:32.734 "iops": 8364.192595758048, 00:14:32.734 "mibps": 1045.524074469756, 00:14:32.734 "io_failed": 0, 00:14:32.734 "io_timeout": 0, 00:14:32.734 "avg_latency_us": 115.35086325956229, 00:14:32.734 "min_latency_us": 39.33090909090909, 00:14:32.734 "max_latency_us": 1861.8181818181818 00:14:32.734 } 00:14:32.734 ], 00:14:32.734 "core_count": 1 00:14:32.734 } 00:14:32.734 [2024-11-20 08:48:03.368332] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.734 [2024-11-20 08:48:03.368391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.734 [2024-11-20 08:48:03.368661] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.734 [2024-11-20 08:48:03.368684] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, 
state offline 00:14:32.734 08:48:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.734 08:48:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75333 00:14:32.734 08:48:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75333 ']' 00:14:32.734 08:48:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75333 00:14:32.734 08:48:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:14:32.734 08:48:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:32.734 08:48:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75333 00:14:32.734 killing process with pid 75333 00:14:32.734 08:48:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:32.734 08:48:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:32.734 08:48:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75333' 00:14:32.734 08:48:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75333 00:14:32.734 [2024-11-20 08:48:03.407790] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:32.734 08:48:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75333 00:14:32.993 [2024-11-20 08:48:03.693901] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:33.928 08:48:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rDiTSBPiRb 00:14:33.928 08:48:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:33.928 08:48:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:33.928 08:48:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:14:33.928 08:48:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:33.928 08:48:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:33.928 08:48:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:33.928 ************************************ 00:14:33.928 END TEST raid_write_error_test 00:14:33.928 ************************************ 00:14:33.928 08:48:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:33.928 00:14:33.928 real 0m4.843s 00:14:33.928 user 0m6.013s 00:14:33.928 sys 0m0.595s 00:14:33.928 08:48:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:33.928 08:48:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.928 08:48:04 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:14:33.928 08:48:04 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:33.928 08:48:04 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:14:33.928 08:48:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:33.928 08:48:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:33.928 08:48:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:33.928 ************************************ 00:14:33.928 START TEST raid_rebuild_test 00:14:33.928 ************************************ 00:14:33.928 08:48:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:14:33.928 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:33.928 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:33.928 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:33.928 
08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:33.928 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:33.928 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:33.928 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.928 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:33.928 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:33.928 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.929 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:33.929 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:33.929 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.929 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:33.929 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:33.929 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:33.929 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:33.929 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:33.929 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:33.929 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:33.929 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:33.929 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:33.929 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:14:33.929 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75478 00:14:33.929 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75478 00:14:33.929 08:48:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:33.929 08:48:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75478 ']' 00:14:33.929 08:48:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.929 08:48:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:33.929 08:48:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.929 08:48:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:33.929 08:48:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.187 [2024-11-20 08:48:04.939592] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:14:34.187 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:34.187 Zero copy mechanism will not be used. 
00:14:34.187 [2024-11-20 08:48:04.939937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75478 ] 00:14:34.446 [2024-11-20 08:48:05.125736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.446 [2024-11-20 08:48:05.246422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.706 [2024-11-20 08:48:05.448633] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.706 [2024-11-20 08:48:05.448927] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.275 08:48:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:35.275 08:48:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:35.275 08:48:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:35.275 08:48:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:35.275 08:48:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.275 08:48:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.275 BaseBdev1_malloc 00:14:35.275 08:48:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.275 08:48:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:35.275 08:48:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.275 08:48:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.275 [2024-11-20 08:48:05.954096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:35.275 
[2024-11-20 08:48:05.954201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.275 [2024-11-20 08:48:05.954237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:35.275 [2024-11-20 08:48:05.954257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.275 [2024-11-20 08:48:05.957059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.275 [2024-11-20 08:48:05.957300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:35.275 BaseBdev1 00:14:35.275 08:48:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.275 08:48:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:35.275 08:48:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:35.275 08:48:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.275 08:48:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.275 BaseBdev2_malloc 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.275 [2024-11-20 08:48:06.009761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:35.275 [2024-11-20 08:48:06.009826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.275 [2024-11-20 08:48:06.009855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:14:35.275 [2024-11-20 08:48:06.009875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.275 [2024-11-20 08:48:06.012670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.275 [2024-11-20 08:48:06.012734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:35.275 BaseBdev2 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.275 spare_malloc 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.275 spare_delay 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.275 [2024-11-20 08:48:06.088428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:35.275 [2024-11-20 08:48:06.088637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:35.275 [2024-11-20 08:48:06.088808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:35.275 [2024-11-20 08:48:06.088946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.275 [2024-11-20 08:48:06.091786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.275 [2024-11-20 08:48:06.091989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:35.275 spare 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.275 [2024-11-20 08:48:06.096540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:35.275 [2024-11-20 08:48:06.098903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:35.275 [2024-11-20 08:48:06.099019] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:35.275 [2024-11-20 08:48:06.099041] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:35.275 [2024-11-20 08:48:06.099400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:35.275 [2024-11-20 08:48:06.099600] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:35.275 [2024-11-20 08:48:06.099625] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:35.275 [2024-11-20 08:48:06.099827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.275 "name": "raid_bdev1", 00:14:35.275 "uuid": "33b44735-eb2f-49c7-b6da-3d003b652bc8", 00:14:35.275 "strip_size_kb": 0, 00:14:35.275 "state": "online", 00:14:35.275 
"raid_level": "raid1", 00:14:35.275 "superblock": false, 00:14:35.275 "num_base_bdevs": 2, 00:14:35.275 "num_base_bdevs_discovered": 2, 00:14:35.275 "num_base_bdevs_operational": 2, 00:14:35.275 "base_bdevs_list": [ 00:14:35.275 { 00:14:35.275 "name": "BaseBdev1", 00:14:35.275 "uuid": "5d7390f8-ad61-5d98-8d01-a28bca7de5d8", 00:14:35.275 "is_configured": true, 00:14:35.275 "data_offset": 0, 00:14:35.275 "data_size": 65536 00:14:35.275 }, 00:14:35.275 { 00:14:35.275 "name": "BaseBdev2", 00:14:35.275 "uuid": "70023122-08ac-5a41-8b60-4b26dee24585", 00:14:35.275 "is_configured": true, 00:14:35.275 "data_offset": 0, 00:14:35.275 "data_size": 65536 00:14:35.275 } 00:14:35.275 ] 00:14:35.275 }' 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.275 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.843 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:35.843 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:35.843 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.843 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.843 [2024-11-20 08:48:06.612995] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:35.843 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.843 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:35.843 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.843 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:35.843 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.843 08:48:06 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.843 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.843 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:35.843 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:35.843 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:35.843 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:35.843 08:48:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:35.844 08:48:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:35.844 08:48:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:35.844 08:48:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:35.844 08:48:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:35.844 08:48:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:35.844 08:48:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:35.844 08:48:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:35.844 08:48:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:35.844 08:48:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:36.102 [2024-11-20 08:48:06.948831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:36.102 /dev/nbd0 00:14:36.102 08:48:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:36.102 08:48:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:14:36.102 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:36.102 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:36.102 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:36.102 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:36.102 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:36.103 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:36.103 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:36.103 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:36.103 08:48:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.103 1+0 records in 00:14:36.103 1+0 records out 00:14:36.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000586974 s, 7.0 MB/s 00:14:36.103 08:48:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.103 08:48:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:36.103 08:48:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.103 08:48:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:36.103 08:48:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:36.103 08:48:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:36.103 08:48:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.103 08:48:07 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:36.103 08:48:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:36.103 08:48:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:42.670 65536+0 records in 00:14:42.670 65536+0 records out 00:14:42.670 33554432 bytes (34 MB, 32 MiB) copied, 6.33012 s, 5.3 MB/s 00:14:42.670 08:48:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:42.670 08:48:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:42.670 08:48:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:42.670 08:48:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:42.670 08:48:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:42.670 08:48:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:42.670 08:48:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:42.929 [2024-11-20 08:48:13.634657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.929 [2024-11-20 08:48:13.642837] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.929 08:48:13 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.929 "name": "raid_bdev1", 00:14:42.929 "uuid": "33b44735-eb2f-49c7-b6da-3d003b652bc8", 00:14:42.929 "strip_size_kb": 0, 00:14:42.929 "state": "online", 00:14:42.929 "raid_level": "raid1", 00:14:42.929 "superblock": false, 00:14:42.929 "num_base_bdevs": 2, 00:14:42.929 "num_base_bdevs_discovered": 1, 00:14:42.929 "num_base_bdevs_operational": 1, 00:14:42.929 "base_bdevs_list": [ 00:14:42.929 { 00:14:42.929 "name": null, 00:14:42.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.929 "is_configured": false, 00:14:42.929 "data_offset": 0, 00:14:42.929 "data_size": 65536 00:14:42.929 }, 00:14:42.929 { 00:14:42.929 "name": "BaseBdev2", 00:14:42.929 "uuid": "70023122-08ac-5a41-8b60-4b26dee24585", 00:14:42.929 "is_configured": true, 00:14:42.929 "data_offset": 0, 00:14:42.929 "data_size": 65536 00:14:42.929 } 00:14:42.929 ] 00:14:42.929 }' 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.929 08:48:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.188 08:48:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:43.188 08:48:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.188 08:48:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.447 [2024-11-20 08:48:14.103014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:43.447 [2024-11-20 08:48:14.119921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:14:43.447 08:48:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.447 08:48:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:43.447 [2024-11-20 08:48:14.122460] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:44.386 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:44.386 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.386 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:44.386 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.386 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.386 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.386 08:48:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.386 08:48:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.386 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.386 08:48:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.386 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.386 "name": "raid_bdev1", 00:14:44.386 "uuid": "33b44735-eb2f-49c7-b6da-3d003b652bc8", 00:14:44.386 "strip_size_kb": 0, 00:14:44.386 "state": "online", 00:14:44.386 "raid_level": "raid1", 00:14:44.386 "superblock": false, 00:14:44.386 "num_base_bdevs": 2, 00:14:44.386 "num_base_bdevs_discovered": 2, 00:14:44.386 "num_base_bdevs_operational": 2, 00:14:44.386 "process": { 00:14:44.386 "type": "rebuild", 00:14:44.386 "target": "spare", 00:14:44.386 "progress": { 00:14:44.386 
"blocks": 20480, 00:14:44.386 "percent": 31 00:14:44.386 } 00:14:44.386 }, 00:14:44.386 "base_bdevs_list": [ 00:14:44.386 { 00:14:44.386 "name": "spare", 00:14:44.386 "uuid": "32b51907-fff1-50ab-9bdd-5fd88561dbe0", 00:14:44.386 "is_configured": true, 00:14:44.386 "data_offset": 0, 00:14:44.386 "data_size": 65536 00:14:44.386 }, 00:14:44.386 { 00:14:44.386 "name": "BaseBdev2", 00:14:44.386 "uuid": "70023122-08ac-5a41-8b60-4b26dee24585", 00:14:44.386 "is_configured": true, 00:14:44.386 "data_offset": 0, 00:14:44.386 "data_size": 65536 00:14:44.386 } 00:14:44.386 ] 00:14:44.386 }' 00:14:44.386 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.386 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:44.386 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.386 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:44.386 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:44.386 08:48:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.386 08:48:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.386 [2024-11-20 08:48:15.275485] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:44.644 [2024-11-20 08:48:15.330602] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:44.644 [2024-11-20 08:48:15.330696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.644 [2024-11-20 08:48:15.330720] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:44.644 [2024-11-20 08:48:15.330736] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:44.644 08:48:15 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.644 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:44.644 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.644 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.644 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.644 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.644 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:44.644 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.644 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.644 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.644 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.644 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.644 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.644 08:48:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.645 08:48:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.645 08:48:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.645 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.645 "name": "raid_bdev1", 00:14:44.645 "uuid": "33b44735-eb2f-49c7-b6da-3d003b652bc8", 00:14:44.645 "strip_size_kb": 0, 00:14:44.645 "state": "online", 00:14:44.645 "raid_level": "raid1", 00:14:44.645 
"superblock": false, 00:14:44.645 "num_base_bdevs": 2, 00:14:44.645 "num_base_bdevs_discovered": 1, 00:14:44.645 "num_base_bdevs_operational": 1, 00:14:44.645 "base_bdevs_list": [ 00:14:44.645 { 00:14:44.645 "name": null, 00:14:44.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.645 "is_configured": false, 00:14:44.645 "data_offset": 0, 00:14:44.645 "data_size": 65536 00:14:44.645 }, 00:14:44.645 { 00:14:44.645 "name": "BaseBdev2", 00:14:44.645 "uuid": "70023122-08ac-5a41-8b60-4b26dee24585", 00:14:44.645 "is_configured": true, 00:14:44.645 "data_offset": 0, 00:14:44.645 "data_size": 65536 00:14:44.645 } 00:14:44.645 ] 00:14:44.645 }' 00:14:44.645 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.645 08:48:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.964 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:44.964 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.964 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:44.964 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:44.964 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.964 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.964 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.964 08:48:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.964 08:48:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.222 08:48:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.222 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:45.222 "name": "raid_bdev1", 00:14:45.222 "uuid": "33b44735-eb2f-49c7-b6da-3d003b652bc8", 00:14:45.222 "strip_size_kb": 0, 00:14:45.222 "state": "online", 00:14:45.222 "raid_level": "raid1", 00:14:45.222 "superblock": false, 00:14:45.222 "num_base_bdevs": 2, 00:14:45.222 "num_base_bdevs_discovered": 1, 00:14:45.222 "num_base_bdevs_operational": 1, 00:14:45.222 "base_bdevs_list": [ 00:14:45.222 { 00:14:45.222 "name": null, 00:14:45.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.222 "is_configured": false, 00:14:45.222 "data_offset": 0, 00:14:45.222 "data_size": 65536 00:14:45.222 }, 00:14:45.222 { 00:14:45.222 "name": "BaseBdev2", 00:14:45.222 "uuid": "70023122-08ac-5a41-8b60-4b26dee24585", 00:14:45.222 "is_configured": true, 00:14:45.222 "data_offset": 0, 00:14:45.222 "data_size": 65536 00:14:45.222 } 00:14:45.222 ] 00:14:45.222 }' 00:14:45.222 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.222 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:45.222 08:48:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.222 08:48:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:45.222 08:48:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:45.222 08:48:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.222 08:48:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.222 [2024-11-20 08:48:16.022412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:45.223 [2024-11-20 08:48:16.038496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:14:45.223 08:48:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.223 
08:48:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:45.223 [2024-11-20 08:48:16.040965] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:46.159 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.159 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.159 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.159 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.159 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.159 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.159 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.159 08:48:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.159 08:48:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.159 08:48:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.418 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.418 "name": "raid_bdev1", 00:14:46.418 "uuid": "33b44735-eb2f-49c7-b6da-3d003b652bc8", 00:14:46.418 "strip_size_kb": 0, 00:14:46.418 "state": "online", 00:14:46.418 "raid_level": "raid1", 00:14:46.418 "superblock": false, 00:14:46.418 "num_base_bdevs": 2, 00:14:46.418 "num_base_bdevs_discovered": 2, 00:14:46.418 "num_base_bdevs_operational": 2, 00:14:46.418 "process": { 00:14:46.418 "type": "rebuild", 00:14:46.418 "target": "spare", 00:14:46.418 "progress": { 00:14:46.418 "blocks": 20480, 00:14:46.418 "percent": 31 00:14:46.418 } 00:14:46.418 }, 00:14:46.418 "base_bdevs_list": [ 
00:14:46.418 { 00:14:46.418 "name": "spare", 00:14:46.418 "uuid": "32b51907-fff1-50ab-9bdd-5fd88561dbe0", 00:14:46.418 "is_configured": true, 00:14:46.418 "data_offset": 0, 00:14:46.418 "data_size": 65536 00:14:46.418 }, 00:14:46.418 { 00:14:46.418 "name": "BaseBdev2", 00:14:46.418 "uuid": "70023122-08ac-5a41-8b60-4b26dee24585", 00:14:46.418 "is_configured": true, 00:14:46.418 "data_offset": 0, 00:14:46.418 "data_size": 65536 00:14:46.418 } 00:14:46.418 ] 00:14:46.418 }' 00:14:46.418 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.418 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:46.418 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.418 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.418 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:46.418 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:46.418 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:46.418 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:46.418 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=398 00:14:46.418 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:46.418 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.418 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.418 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.418 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.418 
08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.418 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.418 08:48:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.418 08:48:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.418 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.418 08:48:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.418 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.418 "name": "raid_bdev1", 00:14:46.418 "uuid": "33b44735-eb2f-49c7-b6da-3d003b652bc8", 00:14:46.418 "strip_size_kb": 0, 00:14:46.418 "state": "online", 00:14:46.418 "raid_level": "raid1", 00:14:46.418 "superblock": false, 00:14:46.418 "num_base_bdevs": 2, 00:14:46.418 "num_base_bdevs_discovered": 2, 00:14:46.418 "num_base_bdevs_operational": 2, 00:14:46.418 "process": { 00:14:46.418 "type": "rebuild", 00:14:46.418 "target": "spare", 00:14:46.418 "progress": { 00:14:46.418 "blocks": 22528, 00:14:46.418 "percent": 34 00:14:46.418 } 00:14:46.418 }, 00:14:46.418 "base_bdevs_list": [ 00:14:46.418 { 00:14:46.418 "name": "spare", 00:14:46.418 "uuid": "32b51907-fff1-50ab-9bdd-5fd88561dbe0", 00:14:46.418 "is_configured": true, 00:14:46.418 "data_offset": 0, 00:14:46.418 "data_size": 65536 00:14:46.418 }, 00:14:46.418 { 00:14:46.418 "name": "BaseBdev2", 00:14:46.418 "uuid": "70023122-08ac-5a41-8b60-4b26dee24585", 00:14:46.418 "is_configured": true, 00:14:46.418 "data_offset": 0, 00:14:46.418 "data_size": 65536 00:14:46.418 } 00:14:46.418 ] 00:14:46.418 }' 00:14:46.418 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.418 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:14:46.418 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.675 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.675 08:48:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:47.609 08:48:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:47.609 08:48:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.609 08:48:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.609 08:48:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.609 08:48:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.609 08:48:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.609 08:48:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.609 08:48:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.609 08:48:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.609 08:48:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.609 08:48:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.609 08:48:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.609 "name": "raid_bdev1", 00:14:47.609 "uuid": "33b44735-eb2f-49c7-b6da-3d003b652bc8", 00:14:47.609 "strip_size_kb": 0, 00:14:47.609 "state": "online", 00:14:47.609 "raid_level": "raid1", 00:14:47.609 "superblock": false, 00:14:47.609 "num_base_bdevs": 2, 00:14:47.609 "num_base_bdevs_discovered": 2, 00:14:47.609 "num_base_bdevs_operational": 2, 00:14:47.609 "process": { 
00:14:47.609 "type": "rebuild", 00:14:47.609 "target": "spare", 00:14:47.609 "progress": { 00:14:47.609 "blocks": 47104, 00:14:47.609 "percent": 71 00:14:47.609 } 00:14:47.609 }, 00:14:47.609 "base_bdevs_list": [ 00:14:47.609 { 00:14:47.609 "name": "spare", 00:14:47.609 "uuid": "32b51907-fff1-50ab-9bdd-5fd88561dbe0", 00:14:47.609 "is_configured": true, 00:14:47.609 "data_offset": 0, 00:14:47.609 "data_size": 65536 00:14:47.609 }, 00:14:47.609 { 00:14:47.609 "name": "BaseBdev2", 00:14:47.609 "uuid": "70023122-08ac-5a41-8b60-4b26dee24585", 00:14:47.609 "is_configured": true, 00:14:47.609 "data_offset": 0, 00:14:47.609 "data_size": 65536 00:14:47.609 } 00:14:47.609 ] 00:14:47.609 }' 00:14:47.609 08:48:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.609 08:48:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.609 08:48:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.609 08:48:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.609 08:48:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:48.542 [2024-11-20 08:48:19.262650] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:48.542 [2024-11-20 08:48:19.262987] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:48.542 [2024-11-20 08:48:19.263074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.800 "name": "raid_bdev1", 00:14:48.800 "uuid": "33b44735-eb2f-49c7-b6da-3d003b652bc8", 00:14:48.800 "strip_size_kb": 0, 00:14:48.800 "state": "online", 00:14:48.800 "raid_level": "raid1", 00:14:48.800 "superblock": false, 00:14:48.800 "num_base_bdevs": 2, 00:14:48.800 "num_base_bdevs_discovered": 2, 00:14:48.800 "num_base_bdevs_operational": 2, 00:14:48.800 "base_bdevs_list": [ 00:14:48.800 { 00:14:48.800 "name": "spare", 00:14:48.800 "uuid": "32b51907-fff1-50ab-9bdd-5fd88561dbe0", 00:14:48.800 "is_configured": true, 00:14:48.800 "data_offset": 0, 00:14:48.800 "data_size": 65536 00:14:48.800 }, 00:14:48.800 { 00:14:48.800 "name": "BaseBdev2", 00:14:48.800 "uuid": "70023122-08ac-5a41-8b60-4b26dee24585", 00:14:48.800 "is_configured": true, 00:14:48.800 "data_offset": 0, 00:14:48.800 "data_size": 65536 00:14:48.800 } 00:14:48.800 ] 00:14:48.800 }' 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:48.800 08:48:19 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.800 08:48:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.059 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.059 "name": "raid_bdev1", 00:14:49.059 "uuid": "33b44735-eb2f-49c7-b6da-3d003b652bc8", 00:14:49.059 "strip_size_kb": 0, 00:14:49.059 "state": "online", 00:14:49.059 "raid_level": "raid1", 00:14:49.059 "superblock": false, 00:14:49.059 "num_base_bdevs": 2, 00:14:49.059 "num_base_bdevs_discovered": 2, 00:14:49.059 "num_base_bdevs_operational": 2, 00:14:49.059 "base_bdevs_list": [ 00:14:49.059 { 00:14:49.059 "name": "spare", 00:14:49.059 "uuid": "32b51907-fff1-50ab-9bdd-5fd88561dbe0", 00:14:49.059 "is_configured": true, 
00:14:49.059 "data_offset": 0, 00:14:49.059 "data_size": 65536 00:14:49.059 }, 00:14:49.059 { 00:14:49.059 "name": "BaseBdev2", 00:14:49.059 "uuid": "70023122-08ac-5a41-8b60-4b26dee24585", 00:14:49.059 "is_configured": true, 00:14:49.059 "data_offset": 0, 00:14:49.059 "data_size": 65536 00:14:49.059 } 00:14:49.059 ] 00:14:49.059 }' 00:14:49.059 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.059 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:49.059 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.059 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:49.059 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:49.059 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.059 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.059 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.059 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.059 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:49.059 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.059 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.059 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.059 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.059 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.059 08:48:19 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.059 08:48:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.059 08:48:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.059 08:48:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.059 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.059 "name": "raid_bdev1", 00:14:49.059 "uuid": "33b44735-eb2f-49c7-b6da-3d003b652bc8", 00:14:49.059 "strip_size_kb": 0, 00:14:49.059 "state": "online", 00:14:49.059 "raid_level": "raid1", 00:14:49.059 "superblock": false, 00:14:49.059 "num_base_bdevs": 2, 00:14:49.059 "num_base_bdevs_discovered": 2, 00:14:49.059 "num_base_bdevs_operational": 2, 00:14:49.059 "base_bdevs_list": [ 00:14:49.059 { 00:14:49.059 "name": "spare", 00:14:49.059 "uuid": "32b51907-fff1-50ab-9bdd-5fd88561dbe0", 00:14:49.059 "is_configured": true, 00:14:49.059 "data_offset": 0, 00:14:49.059 "data_size": 65536 00:14:49.059 }, 00:14:49.059 { 00:14:49.059 "name": "BaseBdev2", 00:14:49.059 "uuid": "70023122-08ac-5a41-8b60-4b26dee24585", 00:14:49.059 "is_configured": true, 00:14:49.059 "data_offset": 0, 00:14:49.060 "data_size": 65536 00:14:49.060 } 00:14:49.060 ] 00:14:49.060 }' 00:14:49.060 08:48:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.060 08:48:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.662 08:48:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:49.662 08:48:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.662 08:48:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.662 [2024-11-20 08:48:20.330373] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:49.662 [2024-11-20 08:48:20.330534] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:49.662 [2024-11-20 08:48:20.330745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:49.662 [2024-11-20 08:48:20.330957] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:49.662 [2024-11-20 08:48:20.331198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:49.662 08:48:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.662 08:48:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.662 08:48:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.662 08:48:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.662 08:48:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:49.662 08:48:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.662 08:48:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:49.662 08:48:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:49.662 08:48:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:49.662 08:48:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:49.662 08:48:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:49.662 08:48:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:49.662 08:48:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:49.662 08:48:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:14:49.662 08:48:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:49.662 08:48:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:49.662 08:48:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:49.662 08:48:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:49.662 08:48:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:49.920 /dev/nbd0 00:14:49.920 08:48:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:49.921 08:48:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:49.921 08:48:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:49.921 08:48:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:49.921 08:48:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:49.921 08:48:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:49.921 08:48:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:49.921 08:48:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:49.921 08:48:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:49.921 08:48:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:49.921 08:48:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:49.921 1+0 records in 00:14:49.921 1+0 records out 00:14:49.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030739 s, 13.3 MB/s 00:14:49.921 08:48:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:49.921 08:48:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:49.921 08:48:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:49.921 08:48:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:49.921 08:48:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:49.921 08:48:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:49.921 08:48:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:49.921 08:48:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:50.179 /dev/nbd1 00:14:50.179 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:50.179 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:50.179 08:48:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:50.179 08:48:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:50.179 08:48:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:50.179 08:48:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:50.179 08:48:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:50.179 08:48:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:50.179 08:48:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:50.179 08:48:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:50.179 08:48:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:50.179 1+0 records in 00:14:50.179 1+0 records out 00:14:50.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392446 s, 10.4 MB/s 00:14:50.179 08:48:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.179 08:48:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:50.179 08:48:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.179 08:48:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:50.179 08:48:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:50.179 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:50.179 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:50.179 08:48:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:50.438 08:48:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:50.438 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:50.438 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:50.438 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:50.438 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:50.438 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:50.438 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:50.697 08:48:21 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:50.697 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:50.697 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:50.697 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:50.697 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:50.697 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:50.697 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:50.697 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:50.697 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:50.697 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:50.955 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:50.955 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:50.955 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:50.955 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:50.955 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:50.955 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:50.955 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:50.955 08:48:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:50.955 08:48:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:50.955 08:48:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75478 00:14:50.955 08:48:21 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75478 ']' 00:14:50.955 08:48:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75478 00:14:50.955 08:48:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:50.955 08:48:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:50.955 08:48:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75478 00:14:50.955 killing process with pid 75478 00:14:50.955 08:48:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:50.955 08:48:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:50.955 08:48:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75478' 00:14:50.955 08:48:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75478 00:14:50.955 Received shutdown signal, test time was about 60.000000 seconds 00:14:50.955 00:14:50.955 Latency(us) 00:14:50.955 [2024-11-20T08:48:21.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.955 [2024-11-20T08:48:21.871Z] =================================================================================================================== 00:14:50.955 [2024-11-20T08:48:21.871Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:50.956 08:48:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75478 00:14:50.956 [2024-11-20 08:48:21.825018] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:51.214 [2024-11-20 08:48:22.085855] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:52.588 08:48:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:52.588 00:14:52.588 real 0m18.267s 00:14:52.588 user 0m20.922s 00:14:52.588 sys 0m3.447s 00:14:52.588 08:48:23 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:52.588 08:48:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.588 ************************************ 00:14:52.588 END TEST raid_rebuild_test 00:14:52.588 ************************************ 00:14:52.588 08:48:23 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:14:52.588 08:48:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:52.588 08:48:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:52.588 08:48:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:52.588 ************************************ 00:14:52.588 START TEST raid_rebuild_test_sb 00:14:52.588 ************************************ 00:14:52.588 08:48:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:14:52.588 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:52.588 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:52.588 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:52.588 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:52.588 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:52.588 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:52.588 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75923 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75923 00:14:52.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75923 ']' 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:52.589 08:48:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.589 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:52.589 Zero copy mechanism will not be used. 00:14:52.589 [2024-11-20 08:48:23.263656] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:14:52.589 [2024-11-20 08:48:23.263831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75923 ] 00:14:52.589 [2024-11-20 08:48:23.446691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.847 [2024-11-20 08:48:23.572130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.105 [2024-11-20 08:48:23.774601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.105 [2024-11-20 08:48:23.774671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.364 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:53.364 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:53.364 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:53.364 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:53.364 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.364 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.364 BaseBdev1_malloc 00:14:53.364 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.364 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:53.364 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.364 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.364 [2024-11-20 08:48:24.260314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:14:53.364 [2024-11-20 08:48:24.260396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.364 [2024-11-20 08:48:24.260427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:53.364 [2024-11-20 08:48:24.260445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.364 [2024-11-20 08:48:24.263182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.364 [2024-11-20 08:48:24.263363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:53.364 BaseBdev1 00:14:53.364 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.364 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:53.364 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:53.364 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.364 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.622 BaseBdev2_malloc 00:14:53.622 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.622 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:53.622 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.622 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.622 [2024-11-20 08:48:24.315843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:53.622 [2024-11-20 08:48:24.315915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.622 [2024-11-20 08:48:24.315942] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:53.622 [2024-11-20 08:48:24.315962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.622 [2024-11-20 08:48:24.318654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.622 [2024-11-20 08:48:24.318836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:53.622 BaseBdev2 00:14:53.622 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.622 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:53.622 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.622 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.622 spare_malloc 00:14:53.622 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.622 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:53.622 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.622 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.622 spare_delay 00:14:53.622 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.622 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:53.622 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.622 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.622 [2024-11-20 08:48:24.384428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:14:53.622 [2024-11-20 08:48:24.384632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.622 [2024-11-20 08:48:24.384671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:53.622 [2024-11-20 08:48:24.384691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.622 [2024-11-20 08:48:24.387454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.622 [2024-11-20 08:48:24.387509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:53.623 spare 00:14:53.623 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.623 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:53.623 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.623 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.623 [2024-11-20 08:48:24.392507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:53.623 [2024-11-20 08:48:24.394829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:53.623 [2024-11-20 08:48:24.395051] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:53.623 [2024-11-20 08:48:24.395077] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:53.623 [2024-11-20 08:48:24.395402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:53.623 [2024-11-20 08:48:24.395613] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:53.623 [2024-11-20 08:48:24.395629] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007780 00:14:53.623 [2024-11-20 08:48:24.395809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.623 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.623 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:53.623 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.623 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.623 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.623 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.623 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:53.623 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.623 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.623 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.623 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.623 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.623 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.623 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.623 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.623 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.623 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:14:53.623 "name": "raid_bdev1", 00:14:53.623 "uuid": "61e4cebe-2698-41f2-856b-8d35d9cad236", 00:14:53.623 "strip_size_kb": 0, 00:14:53.623 "state": "online", 00:14:53.623 "raid_level": "raid1", 00:14:53.623 "superblock": true, 00:14:53.623 "num_base_bdevs": 2, 00:14:53.623 "num_base_bdevs_discovered": 2, 00:14:53.623 "num_base_bdevs_operational": 2, 00:14:53.623 "base_bdevs_list": [ 00:14:53.623 { 00:14:53.623 "name": "BaseBdev1", 00:14:53.623 "uuid": "8f3d0434-faed-5427-a923-f2c311fb4f8b", 00:14:53.623 "is_configured": true, 00:14:53.623 "data_offset": 2048, 00:14:53.623 "data_size": 63488 00:14:53.623 }, 00:14:53.623 { 00:14:53.623 "name": "BaseBdev2", 00:14:53.623 "uuid": "03662df8-3560-5eb9-a8b4-509893407054", 00:14:53.623 "is_configured": true, 00:14:53.623 "data_offset": 2048, 00:14:53.623 "data_size": 63488 00:14:53.623 } 00:14:53.623 ] 00:14:53.623 }' 00:14:53.623 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.623 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.190 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:54.190 08:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:54.190 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.190 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.191 [2024-11-20 08:48:24.960999] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:54.191 08:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.191 08:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:54.191 08:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.191 08:48:25 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:54.191 08:48:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.191 08:48:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.191 08:48:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.191 08:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:54.191 08:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:54.191 08:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:54.191 08:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:54.191 08:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:54.191 08:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:54.191 08:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:54.191 08:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:54.191 08:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:54.191 08:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:54.191 08:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:54.191 08:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:54.191 08:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:54.191 08:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:54.449 [2024-11-20 08:48:25.324795] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:54.449 /dev/nbd0 00:14:54.449 08:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:54.708 08:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:54.708 08:48:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:54.708 08:48:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:54.708 08:48:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:54.708 08:48:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:54.708 08:48:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:54.708 08:48:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:54.708 08:48:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:54.708 08:48:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:54.708 08:48:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:54.708 1+0 records in 00:14:54.708 1+0 records out 00:14:54.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356158 s, 11.5 MB/s 00:14:54.708 08:48:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.708 08:48:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:54.708 08:48:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.708 08:48:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:54.708 08:48:25 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:54.708 08:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:54.708 08:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:54.708 08:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:54.708 08:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:54.708 08:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:01.360 63488+0 records in 00:15:01.360 63488+0 records out 00:15:01.360 32505856 bytes (33 MB, 31 MiB) copied, 6.14956 s, 5.3 MB/s 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:01.360 
08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:01.360 [2024-11-20 08:48:31.831792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.360 [2024-11-20 08:48:31.839880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.360 "name": "raid_bdev1", 00:15:01.360 "uuid": "61e4cebe-2698-41f2-856b-8d35d9cad236", 00:15:01.360 "strip_size_kb": 0, 00:15:01.360 "state": "online", 00:15:01.360 "raid_level": "raid1", 00:15:01.360 "superblock": true, 00:15:01.360 "num_base_bdevs": 2, 00:15:01.360 "num_base_bdevs_discovered": 1, 00:15:01.360 "num_base_bdevs_operational": 1, 00:15:01.360 "base_bdevs_list": [ 00:15:01.360 { 00:15:01.360 "name": null, 00:15:01.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.360 "is_configured": false, 00:15:01.360 "data_offset": 0, 00:15:01.360 "data_size": 63488 00:15:01.360 }, 00:15:01.360 { 00:15:01.360 "name": "BaseBdev2", 00:15:01.360 "uuid": "03662df8-3560-5eb9-a8b4-509893407054", 00:15:01.360 "is_configured": true, 00:15:01.360 "data_offset": 2048, 00:15:01.360 "data_size": 63488 00:15:01.360 } 00:15:01.360 ] 00:15:01.360 }' 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.360 08:48:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.618 08:48:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:15:01.618 08:48:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.619 08:48:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.619 [2024-11-20 08:48:32.360083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:01.619 [2024-11-20 08:48:32.376357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:15:01.619 08:48:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.619 08:48:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:01.619 [2024-11-20 08:48:32.378764] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:02.553 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.554 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.554 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.554 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.554 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.554 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.554 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.554 08:48:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.554 08:48:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.554 08:48:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.554 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:15:02.554 "name": "raid_bdev1", 00:15:02.554 "uuid": "61e4cebe-2698-41f2-856b-8d35d9cad236", 00:15:02.554 "strip_size_kb": 0, 00:15:02.554 "state": "online", 00:15:02.554 "raid_level": "raid1", 00:15:02.554 "superblock": true, 00:15:02.554 "num_base_bdevs": 2, 00:15:02.554 "num_base_bdevs_discovered": 2, 00:15:02.554 "num_base_bdevs_operational": 2, 00:15:02.554 "process": { 00:15:02.554 "type": "rebuild", 00:15:02.554 "target": "spare", 00:15:02.554 "progress": { 00:15:02.554 "blocks": 20480, 00:15:02.554 "percent": 32 00:15:02.554 } 00:15:02.554 }, 00:15:02.554 "base_bdevs_list": [ 00:15:02.554 { 00:15:02.554 "name": "spare", 00:15:02.554 "uuid": "2ad44e35-d4aa-54c8-b45f-0fc64f2cb62b", 00:15:02.554 "is_configured": true, 00:15:02.554 "data_offset": 2048, 00:15:02.554 "data_size": 63488 00:15:02.554 }, 00:15:02.554 { 00:15:02.554 "name": "BaseBdev2", 00:15:02.554 "uuid": "03662df8-3560-5eb9-a8b4-509893407054", 00:15:02.554 "is_configured": true, 00:15:02.554 "data_offset": 2048, 00:15:02.554 "data_size": 63488 00:15:02.554 } 00:15:02.554 ] 00:15:02.554 }' 00:15:02.554 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.813 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.813 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.813 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.813 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:02.813 08:48:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.813 08:48:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.813 [2024-11-20 08:48:33.531870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:02.813 [2024-11-20 
08:48:33.587030] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:02.813 [2024-11-20 08:48:33.587296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.813 [2024-11-20 08:48:33.587325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:02.813 [2024-11-20 08:48:33.587343] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:02.813 08:48:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.813 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:02.813 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.813 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.813 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.813 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.813 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:02.813 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.813 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.813 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.813 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.813 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.813 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.813 08:48:33 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.813 08:48:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.813 08:48:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.813 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.813 "name": "raid_bdev1", 00:15:02.813 "uuid": "61e4cebe-2698-41f2-856b-8d35d9cad236", 00:15:02.813 "strip_size_kb": 0, 00:15:02.813 "state": "online", 00:15:02.813 "raid_level": "raid1", 00:15:02.813 "superblock": true, 00:15:02.813 "num_base_bdevs": 2, 00:15:02.813 "num_base_bdevs_discovered": 1, 00:15:02.813 "num_base_bdevs_operational": 1, 00:15:02.813 "base_bdevs_list": [ 00:15:02.813 { 00:15:02.813 "name": null, 00:15:02.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.813 "is_configured": false, 00:15:02.813 "data_offset": 0, 00:15:02.813 "data_size": 63488 00:15:02.813 }, 00:15:02.813 { 00:15:02.813 "name": "BaseBdev2", 00:15:02.813 "uuid": "03662df8-3560-5eb9-a8b4-509893407054", 00:15:02.813 "is_configured": true, 00:15:02.813 "data_offset": 2048, 00:15:02.814 "data_size": 63488 00:15:02.814 } 00:15:02.814 ] 00:15:02.814 }' 00:15:02.814 08:48:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.814 08:48:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.382 08:48:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:03.382 08:48:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.382 08:48:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:03.382 08:48:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:03.382 08:48:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:15:03.382 08:48:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.382 08:48:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.382 08:48:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.382 08:48:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.382 08:48:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.382 08:48:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.382 "name": "raid_bdev1", 00:15:03.382 "uuid": "61e4cebe-2698-41f2-856b-8d35d9cad236", 00:15:03.382 "strip_size_kb": 0, 00:15:03.382 "state": "online", 00:15:03.382 "raid_level": "raid1", 00:15:03.382 "superblock": true, 00:15:03.382 "num_base_bdevs": 2, 00:15:03.382 "num_base_bdevs_discovered": 1, 00:15:03.382 "num_base_bdevs_operational": 1, 00:15:03.382 "base_bdevs_list": [ 00:15:03.382 { 00:15:03.382 "name": null, 00:15:03.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.382 "is_configured": false, 00:15:03.382 "data_offset": 0, 00:15:03.382 "data_size": 63488 00:15:03.382 }, 00:15:03.382 { 00:15:03.382 "name": "BaseBdev2", 00:15:03.382 "uuid": "03662df8-3560-5eb9-a8b4-509893407054", 00:15:03.382 "is_configured": true, 00:15:03.382 "data_offset": 2048, 00:15:03.382 "data_size": 63488 00:15:03.382 } 00:15:03.382 ] 00:15:03.382 }' 00:15:03.382 08:48:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.382 08:48:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:03.382 08:48:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.383 08:48:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:03.383 08:48:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:03.383 08:48:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.383 08:48:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.383 [2024-11-20 08:48:34.258993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:03.383 [2024-11-20 08:48:34.274364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:15:03.383 08:48:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.383 08:48:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:03.383 [2024-11-20 08:48:34.276766] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.756 "name": "raid_bdev1", 00:15:04.756 "uuid": "61e4cebe-2698-41f2-856b-8d35d9cad236", 00:15:04.756 "strip_size_kb": 0, 00:15:04.756 "state": "online", 00:15:04.756 "raid_level": "raid1", 00:15:04.756 "superblock": true, 00:15:04.756 "num_base_bdevs": 2, 00:15:04.756 "num_base_bdevs_discovered": 2, 00:15:04.756 "num_base_bdevs_operational": 2, 00:15:04.756 "process": { 00:15:04.756 "type": "rebuild", 00:15:04.756 "target": "spare", 00:15:04.756 "progress": { 00:15:04.756 "blocks": 20480, 00:15:04.756 "percent": 32 00:15:04.756 } 00:15:04.756 }, 00:15:04.756 "base_bdevs_list": [ 00:15:04.756 { 00:15:04.756 "name": "spare", 00:15:04.756 "uuid": "2ad44e35-d4aa-54c8-b45f-0fc64f2cb62b", 00:15:04.756 "is_configured": true, 00:15:04.756 "data_offset": 2048, 00:15:04.756 "data_size": 63488 00:15:04.756 }, 00:15:04.756 { 00:15:04.756 "name": "BaseBdev2", 00:15:04.756 "uuid": "03662df8-3560-5eb9-a8b4-509893407054", 00:15:04.756 "is_configured": true, 00:15:04.756 "data_offset": 2048, 00:15:04.756 "data_size": 63488 00:15:04.756 } 00:15:04.756 ] 00:15:04.756 }' 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:04.756 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:04.756 08:48:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=416 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.756 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.756 "name": "raid_bdev1", 00:15:04.756 "uuid": "61e4cebe-2698-41f2-856b-8d35d9cad236", 00:15:04.756 "strip_size_kb": 0, 00:15:04.756 "state": "online", 00:15:04.757 "raid_level": "raid1", 00:15:04.757 "superblock": true, 00:15:04.757 "num_base_bdevs": 2, 00:15:04.757 
"num_base_bdevs_discovered": 2, 00:15:04.757 "num_base_bdevs_operational": 2, 00:15:04.757 "process": { 00:15:04.757 "type": "rebuild", 00:15:04.757 "target": "spare", 00:15:04.757 "progress": { 00:15:04.757 "blocks": 22528, 00:15:04.757 "percent": 35 00:15:04.757 } 00:15:04.757 }, 00:15:04.757 "base_bdevs_list": [ 00:15:04.757 { 00:15:04.757 "name": "spare", 00:15:04.757 "uuid": "2ad44e35-d4aa-54c8-b45f-0fc64f2cb62b", 00:15:04.757 "is_configured": true, 00:15:04.757 "data_offset": 2048, 00:15:04.757 "data_size": 63488 00:15:04.757 }, 00:15:04.757 { 00:15:04.757 "name": "BaseBdev2", 00:15:04.757 "uuid": "03662df8-3560-5eb9-a8b4-509893407054", 00:15:04.757 "is_configured": true, 00:15:04.757 "data_offset": 2048, 00:15:04.757 "data_size": 63488 00:15:04.757 } 00:15:04.757 ] 00:15:04.757 }' 00:15:04.757 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.757 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.757 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.757 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.757 08:48:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:05.692 08:48:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:05.692 08:48:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.692 08:48:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.692 08:48:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.692 08:48:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.692 08:48:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:15:05.692 08:48:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.692 08:48:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.692 08:48:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.692 08:48:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.692 08:48:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.950 08:48:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.950 "name": "raid_bdev1", 00:15:05.950 "uuid": "61e4cebe-2698-41f2-856b-8d35d9cad236", 00:15:05.950 "strip_size_kb": 0, 00:15:05.950 "state": "online", 00:15:05.950 "raid_level": "raid1", 00:15:05.950 "superblock": true, 00:15:05.950 "num_base_bdevs": 2, 00:15:05.950 "num_base_bdevs_discovered": 2, 00:15:05.950 "num_base_bdevs_operational": 2, 00:15:05.950 "process": { 00:15:05.950 "type": "rebuild", 00:15:05.950 "target": "spare", 00:15:05.950 "progress": { 00:15:05.950 "blocks": 45056, 00:15:05.950 "percent": 70 00:15:05.950 } 00:15:05.950 }, 00:15:05.950 "base_bdevs_list": [ 00:15:05.950 { 00:15:05.950 "name": "spare", 00:15:05.950 "uuid": "2ad44e35-d4aa-54c8-b45f-0fc64f2cb62b", 00:15:05.950 "is_configured": true, 00:15:05.950 "data_offset": 2048, 00:15:05.950 "data_size": 63488 00:15:05.950 }, 00:15:05.950 { 00:15:05.950 "name": "BaseBdev2", 00:15:05.950 "uuid": "03662df8-3560-5eb9-a8b4-509893407054", 00:15:05.950 "is_configured": true, 00:15:05.950 "data_offset": 2048, 00:15:05.950 "data_size": 63488 00:15:05.950 } 00:15:05.950 ] 00:15:05.950 }' 00:15:05.950 08:48:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.950 08:48:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.950 08:48:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.950 08:48:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.950 08:48:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:06.517 [2024-11-20 08:48:37.398423] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:06.517 [2024-11-20 08:48:37.398806] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:06.517 [2024-11-20 08:48:37.398981] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:15:07.083 "name": "raid_bdev1", 00:15:07.083 "uuid": "61e4cebe-2698-41f2-856b-8d35d9cad236", 00:15:07.083 "strip_size_kb": 0, 00:15:07.083 "state": "online", 00:15:07.083 "raid_level": "raid1", 00:15:07.083 "superblock": true, 00:15:07.083 "num_base_bdevs": 2, 00:15:07.083 "num_base_bdevs_discovered": 2, 00:15:07.083 "num_base_bdevs_operational": 2, 00:15:07.083 "base_bdevs_list": [ 00:15:07.083 { 00:15:07.083 "name": "spare", 00:15:07.083 "uuid": "2ad44e35-d4aa-54c8-b45f-0fc64f2cb62b", 00:15:07.083 "is_configured": true, 00:15:07.083 "data_offset": 2048, 00:15:07.083 "data_size": 63488 00:15:07.083 }, 00:15:07.083 { 00:15:07.083 "name": "BaseBdev2", 00:15:07.083 "uuid": "03662df8-3560-5eb9-a8b4-509893407054", 00:15:07.083 "is_configured": true, 00:15:07.083 "data_offset": 2048, 00:15:07.083 "data_size": 63488 00:15:07.083 } 00:15:07.083 ] 00:15:07.083 }' 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.083 08:48:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.083 "name": "raid_bdev1", 00:15:07.083 "uuid": "61e4cebe-2698-41f2-856b-8d35d9cad236", 00:15:07.083 "strip_size_kb": 0, 00:15:07.083 "state": "online", 00:15:07.083 "raid_level": "raid1", 00:15:07.083 "superblock": true, 00:15:07.083 "num_base_bdevs": 2, 00:15:07.083 "num_base_bdevs_discovered": 2, 00:15:07.083 "num_base_bdevs_operational": 2, 00:15:07.083 "base_bdevs_list": [ 00:15:07.083 { 00:15:07.083 "name": "spare", 00:15:07.083 "uuid": "2ad44e35-d4aa-54c8-b45f-0fc64f2cb62b", 00:15:07.083 "is_configured": true, 00:15:07.083 "data_offset": 2048, 00:15:07.083 "data_size": 63488 00:15:07.083 }, 00:15:07.083 { 00:15:07.083 "name": "BaseBdev2", 00:15:07.083 "uuid": "03662df8-3560-5eb9-a8b4-509893407054", 00:15:07.083 "is_configured": true, 00:15:07.083 "data_offset": 2048, 00:15:07.083 "data_size": 63488 00:15:07.083 } 00:15:07.083 ] 00:15:07.083 }' 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.083 08:48:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:07.084 08:48:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.342 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:07.342 08:48:38 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:07.342 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.342 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.342 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.342 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.342 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:07.342 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.342 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.342 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.342 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.342 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.342 08:48:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.342 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.342 08:48:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.342 08:48:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.342 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.342 "name": "raid_bdev1", 00:15:07.342 "uuid": "61e4cebe-2698-41f2-856b-8d35d9cad236", 00:15:07.342 "strip_size_kb": 0, 00:15:07.342 "state": "online", 00:15:07.342 "raid_level": "raid1", 00:15:07.342 "superblock": true, 00:15:07.342 "num_base_bdevs": 2, 00:15:07.342 
"num_base_bdevs_discovered": 2, 00:15:07.342 "num_base_bdevs_operational": 2, 00:15:07.342 "base_bdevs_list": [ 00:15:07.342 { 00:15:07.342 "name": "spare", 00:15:07.342 "uuid": "2ad44e35-d4aa-54c8-b45f-0fc64f2cb62b", 00:15:07.342 "is_configured": true, 00:15:07.342 "data_offset": 2048, 00:15:07.342 "data_size": 63488 00:15:07.342 }, 00:15:07.342 { 00:15:07.342 "name": "BaseBdev2", 00:15:07.342 "uuid": "03662df8-3560-5eb9-a8b4-509893407054", 00:15:07.342 "is_configured": true, 00:15:07.342 "data_offset": 2048, 00:15:07.342 "data_size": 63488 00:15:07.342 } 00:15:07.342 ] 00:15:07.342 }' 00:15:07.342 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.342 08:48:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.601 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:07.601 08:48:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.601 08:48:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.601 [2024-11-20 08:48:38.470654] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:07.601 [2024-11-20 08:48:38.470828] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:07.601 [2024-11-20 08:48:38.470944] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.601 [2024-11-20 08:48:38.471037] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.601 [2024-11-20 08:48:38.471059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:07.601 08:48:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.601 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:15:07.601 08:48:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.601 08:48:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.601 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:07.601 08:48:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.860 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:07.860 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:07.860 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:07.860 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:07.860 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.860 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:07.860 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:07.860 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:07.860 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:07.860 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:07.860 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:07.860 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:07.860 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:08.118 /dev/nbd0 00:15:08.118 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:15:08.118 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:08.118 08:48:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:08.118 08:48:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:08.118 08:48:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:08.119 08:48:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:08.119 08:48:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:08.119 08:48:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:08.119 08:48:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:08.119 08:48:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:08.119 08:48:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:08.119 1+0 records in 00:15:08.119 1+0 records out 00:15:08.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558687 s, 7.3 MB/s 00:15:08.119 08:48:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.119 08:48:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:08.119 08:48:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.119 08:48:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:08.119 08:48:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:08.119 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:08.119 08:48:38 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:08.119 08:48:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:08.377 /dev/nbd1 00:15:08.377 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:08.377 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:08.377 08:48:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:08.377 08:48:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:08.377 08:48:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:08.377 08:48:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:08.377 08:48:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:08.377 08:48:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:08.377 08:48:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:08.377 08:48:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:08.377 08:48:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:08.377 1+0 records in 00:15:08.377 1+0 records out 00:15:08.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314173 s, 13.0 MB/s 00:15:08.377 08:48:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.377 08:48:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:08.377 08:48:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.377 08:48:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:08.377 08:48:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:08.377 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:08.377 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:08.377 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:08.636 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:08.636 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:08.636 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:08.636 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:08.636 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:08.636 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:08.636 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:08.896 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:08.896 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:08.896 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:08.896 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:08.896 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:08.896 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd0 /proc/partitions 00:15:08.896 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:08.896 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:08.896 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:08.896 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:09.155 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:09.155 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:09.155 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:09.155 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.155 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.155 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:09.155 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:09.155 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.155 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:09.155 08:48:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:09.155 08:48:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.155 08:48:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.155 08:48:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.155 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:09.155 08:48:40 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.155 08:48:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.155 [2024-11-20 08:48:40.015221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:09.155 [2024-11-20 08:48:40.015302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.155 [2024-11-20 08:48:40.015340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:09.155 [2024-11-20 08:48:40.015355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.155 [2024-11-20 08:48:40.018286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.155 [2024-11-20 08:48:40.018331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:09.155 [2024-11-20 08:48:40.018456] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:09.155 [2024-11-20 08:48:40.018524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:09.155 [2024-11-20 08:48:40.018709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:09.155 spare 00:15:09.155 08:48:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.155 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:09.155 08:48:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.155 08:48:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.414 [2024-11-20 08:48:40.118858] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:09.414 [2024-11-20 08:48:40.118935] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:09.414 [2024-11-20 
08:48:40.119425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:15:09.414 [2024-11-20 08:48:40.119686] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:09.414 [2024-11-20 08:48:40.119703] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:09.414 [2024-11-20 08:48:40.119958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.414 08:48:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.414 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:09.414 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.414 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.414 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.414 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.414 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:09.414 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.414 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.414 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.414 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.414 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.414 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.414 08:48:40 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.414 08:48:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.414 08:48:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.414 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.414 "name": "raid_bdev1", 00:15:09.414 "uuid": "61e4cebe-2698-41f2-856b-8d35d9cad236", 00:15:09.414 "strip_size_kb": 0, 00:15:09.414 "state": "online", 00:15:09.414 "raid_level": "raid1", 00:15:09.414 "superblock": true, 00:15:09.414 "num_base_bdevs": 2, 00:15:09.414 "num_base_bdevs_discovered": 2, 00:15:09.414 "num_base_bdevs_operational": 2, 00:15:09.414 "base_bdevs_list": [ 00:15:09.414 { 00:15:09.414 "name": "spare", 00:15:09.414 "uuid": "2ad44e35-d4aa-54c8-b45f-0fc64f2cb62b", 00:15:09.414 "is_configured": true, 00:15:09.414 "data_offset": 2048, 00:15:09.414 "data_size": 63488 00:15:09.414 }, 00:15:09.414 { 00:15:09.414 "name": "BaseBdev2", 00:15:09.414 "uuid": "03662df8-3560-5eb9-a8b4-509893407054", 00:15:09.414 "is_configured": true, 00:15:09.414 "data_offset": 2048, 00:15:09.414 "data_size": 63488 00:15:09.414 } 00:15:09.414 ] 00:15:09.414 }' 00:15:09.414 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.414 08:48:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.983 "name": "raid_bdev1", 00:15:09.983 "uuid": "61e4cebe-2698-41f2-856b-8d35d9cad236", 00:15:09.983 "strip_size_kb": 0, 00:15:09.983 "state": "online", 00:15:09.983 "raid_level": "raid1", 00:15:09.983 "superblock": true, 00:15:09.983 "num_base_bdevs": 2, 00:15:09.983 "num_base_bdevs_discovered": 2, 00:15:09.983 "num_base_bdevs_operational": 2, 00:15:09.983 "base_bdevs_list": [ 00:15:09.983 { 00:15:09.983 "name": "spare", 00:15:09.983 "uuid": "2ad44e35-d4aa-54c8-b45f-0fc64f2cb62b", 00:15:09.983 "is_configured": true, 00:15:09.983 "data_offset": 2048, 00:15:09.983 "data_size": 63488 00:15:09.983 }, 00:15:09.983 { 00:15:09.983 "name": "BaseBdev2", 00:15:09.983 "uuid": "03662df8-3560-5eb9-a8b4-509893407054", 00:15:09.983 "is_configured": true, 00:15:09.983 "data_offset": 2048, 00:15:09.983 "data_size": 63488 00:15:09.983 } 00:15:09.983 ] 00:15:09.983 }' 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:09.983 
08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.983 [2024-11-20 08:48:40.852119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.983 08:48:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.983 08:48:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.242 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.242 "name": "raid_bdev1", 00:15:10.242 "uuid": "61e4cebe-2698-41f2-856b-8d35d9cad236", 00:15:10.242 "strip_size_kb": 0, 00:15:10.242 "state": "online", 00:15:10.242 "raid_level": "raid1", 00:15:10.242 "superblock": true, 00:15:10.242 "num_base_bdevs": 2, 00:15:10.242 "num_base_bdevs_discovered": 1, 00:15:10.242 "num_base_bdevs_operational": 1, 00:15:10.242 "base_bdevs_list": [ 00:15:10.242 { 00:15:10.242 "name": null, 00:15:10.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.242 "is_configured": false, 00:15:10.242 "data_offset": 0, 00:15:10.242 "data_size": 63488 00:15:10.242 }, 00:15:10.242 { 00:15:10.242 "name": "BaseBdev2", 00:15:10.242 "uuid": "03662df8-3560-5eb9-a8b4-509893407054", 00:15:10.242 "is_configured": true, 00:15:10.242 "data_offset": 2048, 00:15:10.242 "data_size": 63488 00:15:10.242 } 00:15:10.242 ] 00:15:10.242 }' 00:15:10.242 08:48:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.242 08:48:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:10.500 08:48:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:10.500 08:48:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.500 08:48:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.500 [2024-11-20 08:48:41.364325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:10.500 [2024-11-20 08:48:41.364696] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:10.500 [2024-11-20 08:48:41.364731] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:10.500 [2024-11-20 08:48:41.364785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:10.500 [2024-11-20 08:48:41.380402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:15:10.500 08:48:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.500 08:48:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:10.500 [2024-11-20 08:48:41.382837] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:11.876 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.876 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.876 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.876 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.876 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.876 08:48:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.876 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.876 08:48:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.876 08:48:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.876 08:48:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.876 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.876 "name": "raid_bdev1", 00:15:11.876 "uuid": "61e4cebe-2698-41f2-856b-8d35d9cad236", 00:15:11.876 "strip_size_kb": 0, 00:15:11.876 "state": "online", 00:15:11.876 "raid_level": "raid1", 00:15:11.876 "superblock": true, 00:15:11.876 "num_base_bdevs": 2, 00:15:11.876 "num_base_bdevs_discovered": 2, 00:15:11.876 "num_base_bdevs_operational": 2, 00:15:11.876 "process": { 00:15:11.876 "type": "rebuild", 00:15:11.876 "target": "spare", 00:15:11.876 "progress": { 00:15:11.876 "blocks": 20480, 00:15:11.876 "percent": 32 00:15:11.876 } 00:15:11.876 }, 00:15:11.876 "base_bdevs_list": [ 00:15:11.876 { 00:15:11.876 "name": "spare", 00:15:11.876 "uuid": "2ad44e35-d4aa-54c8-b45f-0fc64f2cb62b", 00:15:11.876 "is_configured": true, 00:15:11.876 "data_offset": 2048, 00:15:11.876 "data_size": 63488 00:15:11.876 }, 00:15:11.876 { 00:15:11.876 "name": "BaseBdev2", 00:15:11.876 "uuid": "03662df8-3560-5eb9-a8b4-509893407054", 00:15:11.876 "is_configured": true, 00:15:11.876 "data_offset": 2048, 00:15:11.876 "data_size": 63488 00:15:11.876 } 00:15:11.877 ] 00:15:11.877 }' 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.877 [2024-11-20 08:48:42.548186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:11.877 [2024-11-20 08:48:42.590883] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:11.877 [2024-11-20 08:48:42.590973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.877 [2024-11-20 08:48:42.590997] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:11.877 [2024-11-20 08:48:42.591012] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.877 
08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.877 "name": "raid_bdev1", 00:15:11.877 "uuid": "61e4cebe-2698-41f2-856b-8d35d9cad236", 00:15:11.877 "strip_size_kb": 0, 00:15:11.877 "state": "online", 00:15:11.877 "raid_level": "raid1", 00:15:11.877 "superblock": true, 00:15:11.877 "num_base_bdevs": 2, 00:15:11.877 "num_base_bdevs_discovered": 1, 00:15:11.877 "num_base_bdevs_operational": 1, 00:15:11.877 "base_bdevs_list": [ 00:15:11.877 { 00:15:11.877 "name": null, 00:15:11.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.877 "is_configured": false, 00:15:11.877 "data_offset": 0, 00:15:11.877 "data_size": 63488 00:15:11.877 }, 00:15:11.877 { 00:15:11.877 "name": "BaseBdev2", 00:15:11.877 "uuid": "03662df8-3560-5eb9-a8b4-509893407054", 00:15:11.877 "is_configured": true, 00:15:11.877 "data_offset": 2048, 00:15:11.877 "data_size": 63488 00:15:11.877 } 00:15:11.877 ] 00:15:11.877 }' 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.877 08:48:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:12.443 08:48:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:12.443 08:48:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.443 08:48:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.443 [2024-11-20 08:48:43.142577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:12.443 [2024-11-20 08:48:43.142656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.443 [2024-11-20 08:48:43.142687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:12.443 [2024-11-20 08:48:43.142705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.443 [2024-11-20 08:48:43.143323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.443 [2024-11-20 08:48:43.143362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:12.443 [2024-11-20 08:48:43.143477] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:12.443 [2024-11-20 08:48:43.143501] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:12.443 [2024-11-20 08:48:43.143515] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:12.443 [2024-11-20 08:48:43.143551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:12.443 [2024-11-20 08:48:43.158877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:15:12.443 spare 00:15:12.443 08:48:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.443 08:48:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:12.443 [2024-11-20 08:48:43.161315] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:13.377 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.377 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.377 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.377 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.377 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.377 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.377 08:48:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.377 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.377 08:48:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.377 08:48:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.377 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.377 "name": "raid_bdev1", 00:15:13.377 "uuid": "61e4cebe-2698-41f2-856b-8d35d9cad236", 00:15:13.377 "strip_size_kb": 0, 00:15:13.377 "state": "online", 00:15:13.377 
"raid_level": "raid1", 00:15:13.377 "superblock": true, 00:15:13.377 "num_base_bdevs": 2, 00:15:13.377 "num_base_bdevs_discovered": 2, 00:15:13.377 "num_base_bdevs_operational": 2, 00:15:13.377 "process": { 00:15:13.377 "type": "rebuild", 00:15:13.377 "target": "spare", 00:15:13.377 "progress": { 00:15:13.377 "blocks": 20480, 00:15:13.377 "percent": 32 00:15:13.377 } 00:15:13.377 }, 00:15:13.377 "base_bdevs_list": [ 00:15:13.377 { 00:15:13.377 "name": "spare", 00:15:13.377 "uuid": "2ad44e35-d4aa-54c8-b45f-0fc64f2cb62b", 00:15:13.377 "is_configured": true, 00:15:13.377 "data_offset": 2048, 00:15:13.377 "data_size": 63488 00:15:13.377 }, 00:15:13.377 { 00:15:13.377 "name": "BaseBdev2", 00:15:13.377 "uuid": "03662df8-3560-5eb9-a8b4-509893407054", 00:15:13.377 "is_configured": true, 00:15:13.377 "data_offset": 2048, 00:15:13.377 "data_size": 63488 00:15:13.377 } 00:15:13.377 ] 00:15:13.377 }' 00:15:13.377 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.377 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.377 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.636 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.636 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:13.636 08:48:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.636 08:48:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.636 [2024-11-20 08:48:44.322323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:13.636 [2024-11-20 08:48:44.369169] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:13.636 [2024-11-20 08:48:44.369406] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.636 [2024-11-20 08:48:44.369644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:13.636 [2024-11-20 08:48:44.369779] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:13.636 08:48:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.636 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:13.636 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.636 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.636 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.636 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.636 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:13.636 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.636 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.636 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.636 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.636 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.636 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.636 08:48:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.636 08:48:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.636 08:48:44 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.636 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.636 "name": "raid_bdev1", 00:15:13.636 "uuid": "61e4cebe-2698-41f2-856b-8d35d9cad236", 00:15:13.636 "strip_size_kb": 0, 00:15:13.636 "state": "online", 00:15:13.636 "raid_level": "raid1", 00:15:13.636 "superblock": true, 00:15:13.636 "num_base_bdevs": 2, 00:15:13.636 "num_base_bdevs_discovered": 1, 00:15:13.636 "num_base_bdevs_operational": 1, 00:15:13.636 "base_bdevs_list": [ 00:15:13.636 { 00:15:13.636 "name": null, 00:15:13.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.636 "is_configured": false, 00:15:13.636 "data_offset": 0, 00:15:13.636 "data_size": 63488 00:15:13.636 }, 00:15:13.636 { 00:15:13.636 "name": "BaseBdev2", 00:15:13.636 "uuid": "03662df8-3560-5eb9-a8b4-509893407054", 00:15:13.636 "is_configured": true, 00:15:13.636 "data_offset": 2048, 00:15:13.636 "data_size": 63488 00:15:13.636 } 00:15:13.636 ] 00:15:13.636 }' 00:15:13.636 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.636 08:48:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.203 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.203 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.203 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.203 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.203 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.203 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.203 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.203 08:48:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.203 08:48:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.203 08:48:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.203 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.203 "name": "raid_bdev1", 00:15:14.203 "uuid": "61e4cebe-2698-41f2-856b-8d35d9cad236", 00:15:14.203 "strip_size_kb": 0, 00:15:14.203 "state": "online", 00:15:14.203 "raid_level": "raid1", 00:15:14.203 "superblock": true, 00:15:14.203 "num_base_bdevs": 2, 00:15:14.203 "num_base_bdevs_discovered": 1, 00:15:14.203 "num_base_bdevs_operational": 1, 00:15:14.203 "base_bdevs_list": [ 00:15:14.203 { 00:15:14.203 "name": null, 00:15:14.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.203 "is_configured": false, 00:15:14.203 "data_offset": 0, 00:15:14.203 "data_size": 63488 00:15:14.203 }, 00:15:14.203 { 00:15:14.203 "name": "BaseBdev2", 00:15:14.203 "uuid": "03662df8-3560-5eb9-a8b4-509893407054", 00:15:14.203 "is_configured": true, 00:15:14.203 "data_offset": 2048, 00:15:14.203 "data_size": 63488 00:15:14.203 } 00:15:14.203 ] 00:15:14.203 }' 00:15:14.203 08:48:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.203 08:48:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.203 08:48:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.203 08:48:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:14.203 08:48:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:14.203 08:48:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:14.203 08:48:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.203 08:48:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.203 08:48:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:14.203 08:48:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.203 08:48:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.203 [2024-11-20 08:48:45.085266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:14.203 [2024-11-20 08:48:45.085327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.203 [2024-11-20 08:48:45.085359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:14.203 [2024-11-20 08:48:45.085385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.203 [2024-11-20 08:48:45.085931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.203 [2024-11-20 08:48:45.085962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:14.203 [2024-11-20 08:48:45.086062] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:14.203 [2024-11-20 08:48:45.086083] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:14.203 [2024-11-20 08:48:45.086097] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:14.203 [2024-11-20 08:48:45.086109] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:14.203 BaseBdev1 00:15:14.203 08:48:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:14.203 08:48:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:15.581 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:15.581 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.581 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.582 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.582 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.582 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:15.582 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.582 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.582 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.582 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.582 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.582 08:48:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.582 08:48:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.582 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.582 08:48:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.582 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.582 "name": "raid_bdev1", 00:15:15.582 "uuid": "61e4cebe-2698-41f2-856b-8d35d9cad236", 00:15:15.582 "strip_size_kb": 0, 
00:15:15.582 "state": "online", 00:15:15.582 "raid_level": "raid1", 00:15:15.582 "superblock": true, 00:15:15.582 "num_base_bdevs": 2, 00:15:15.582 "num_base_bdevs_discovered": 1, 00:15:15.582 "num_base_bdevs_operational": 1, 00:15:15.582 "base_bdevs_list": [ 00:15:15.582 { 00:15:15.582 "name": null, 00:15:15.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.582 "is_configured": false, 00:15:15.582 "data_offset": 0, 00:15:15.582 "data_size": 63488 00:15:15.582 }, 00:15:15.582 { 00:15:15.582 "name": "BaseBdev2", 00:15:15.582 "uuid": "03662df8-3560-5eb9-a8b4-509893407054", 00:15:15.582 "is_configured": true, 00:15:15.582 "data_offset": 2048, 00:15:15.582 "data_size": 63488 00:15:15.582 } 00:15:15.582 ] 00:15:15.582 }' 00:15:15.582 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.582 08:48:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.841 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:15.841 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.841 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:15.841 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:15.841 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.841 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.841 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.841 08:48:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.841 08:48:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.841 08:48:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.841 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.841 "name": "raid_bdev1", 00:15:15.841 "uuid": "61e4cebe-2698-41f2-856b-8d35d9cad236", 00:15:15.841 "strip_size_kb": 0, 00:15:15.841 "state": "online", 00:15:15.841 "raid_level": "raid1", 00:15:15.841 "superblock": true, 00:15:15.841 "num_base_bdevs": 2, 00:15:15.842 "num_base_bdevs_discovered": 1, 00:15:15.842 "num_base_bdevs_operational": 1, 00:15:15.842 "base_bdevs_list": [ 00:15:15.842 { 00:15:15.842 "name": null, 00:15:15.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.842 "is_configured": false, 00:15:15.842 "data_offset": 0, 00:15:15.842 "data_size": 63488 00:15:15.842 }, 00:15:15.842 { 00:15:15.842 "name": "BaseBdev2", 00:15:15.842 "uuid": "03662df8-3560-5eb9-a8b4-509893407054", 00:15:15.842 "is_configured": true, 00:15:15.842 "data_offset": 2048, 00:15:15.842 "data_size": 63488 00:15:15.842 } 00:15:15.842 ] 00:15:15.842 }' 00:15:15.842 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.842 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:15.842 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.842 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:15.842 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:15.842 08:48:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:15.842 08:48:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:15.842 08:48:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:15.842 08:48:46 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.842 08:48:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:15.842 08:48:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.842 08:48:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:15.842 08:48:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.842 08:48:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.842 [2024-11-20 08:48:46.745784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:15.842 [2024-11-20 08:48:46.745990] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:15.842 [2024-11-20 08:48:46.746018] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:15.842 request: 00:15:15.842 { 00:15:15.842 "base_bdev": "BaseBdev1", 00:15:15.842 "raid_bdev": "raid_bdev1", 00:15:15.842 "method": "bdev_raid_add_base_bdev", 00:15:15.842 "req_id": 1 00:15:15.842 } 00:15:15.842 Got JSON-RPC error response 00:15:15.842 response: 00:15:15.842 { 00:15:15.842 "code": -22, 00:15:15.842 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:15.842 } 00:15:15.842 08:48:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:15.842 08:48:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:15.842 08:48:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:15.842 08:48:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:15.842 08:48:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:15.842 08:48:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:17.221 08:48:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:17.221 08:48:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.221 08:48:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.221 08:48:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.221 08:48:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.221 08:48:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:17.221 08:48:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.221 08:48:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.221 08:48:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.221 08:48:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.221 08:48:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.221 08:48:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.221 08:48:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.221 08:48:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.221 08:48:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.221 08:48:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.221 "name": "raid_bdev1", 00:15:17.221 "uuid": "61e4cebe-2698-41f2-856b-8d35d9cad236", 
00:15:17.221 "strip_size_kb": 0, 00:15:17.221 "state": "online", 00:15:17.221 "raid_level": "raid1", 00:15:17.221 "superblock": true, 00:15:17.221 "num_base_bdevs": 2, 00:15:17.221 "num_base_bdevs_discovered": 1, 00:15:17.221 "num_base_bdevs_operational": 1, 00:15:17.221 "base_bdevs_list": [ 00:15:17.221 { 00:15:17.221 "name": null, 00:15:17.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.222 "is_configured": false, 00:15:17.222 "data_offset": 0, 00:15:17.222 "data_size": 63488 00:15:17.222 }, 00:15:17.222 { 00:15:17.222 "name": "BaseBdev2", 00:15:17.222 "uuid": "03662df8-3560-5eb9-a8b4-509893407054", 00:15:17.222 "is_configured": true, 00:15:17.222 "data_offset": 2048, 00:15:17.222 "data_size": 63488 00:15:17.222 } 00:15:17.222 ] 00:15:17.222 }' 00:15:17.222 08:48:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.222 08:48:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.481 08:48:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:17.481 08:48:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.481 08:48:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:17.481 08:48:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:17.481 08:48:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.481 08:48:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.481 08:48:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.481 08:48:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.481 08:48:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.481 08:48:48 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.481 08:48:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.481 "name": "raid_bdev1", 00:15:17.481 "uuid": "61e4cebe-2698-41f2-856b-8d35d9cad236", 00:15:17.481 "strip_size_kb": 0, 00:15:17.481 "state": "online", 00:15:17.481 "raid_level": "raid1", 00:15:17.481 "superblock": true, 00:15:17.481 "num_base_bdevs": 2, 00:15:17.481 "num_base_bdevs_discovered": 1, 00:15:17.481 "num_base_bdevs_operational": 1, 00:15:17.481 "base_bdevs_list": [ 00:15:17.481 { 00:15:17.481 "name": null, 00:15:17.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.481 "is_configured": false, 00:15:17.481 "data_offset": 0, 00:15:17.481 "data_size": 63488 00:15:17.481 }, 00:15:17.481 { 00:15:17.481 "name": "BaseBdev2", 00:15:17.481 "uuid": "03662df8-3560-5eb9-a8b4-509893407054", 00:15:17.481 "is_configured": true, 00:15:17.481 "data_offset": 2048, 00:15:17.481 "data_size": 63488 00:15:17.481 } 00:15:17.481 ] 00:15:17.481 }' 00:15:17.481 08:48:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.481 08:48:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:17.481 08:48:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.745 08:48:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:17.745 08:48:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75923 00:15:17.745 08:48:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75923 ']' 00:15:17.745 08:48:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75923 00:15:17.745 08:48:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:17.745 08:48:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:15:17.745 08:48:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75923 00:15:17.745 killing process with pid 75923 00:15:17.745 Received shutdown signal, test time was about 60.000000 seconds 00:15:17.745 00:15:17.745 Latency(us) 00:15:17.745 [2024-11-20T08:48:48.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.745 [2024-11-20T08:48:48.661Z] =================================================================================================================== 00:15:17.745 [2024-11-20T08:48:48.661Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:17.745 08:48:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:17.745 08:48:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:17.745 08:48:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75923' 00:15:17.745 08:48:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75923 00:15:17.745 [2024-11-20 08:48:48.432042] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:17.745 08:48:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75923 00:15:17.745 [2024-11-20 08:48:48.432230] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.745 [2024-11-20 08:48:48.432299] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:17.745 [2024-11-20 08:48:48.432321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:18.005 [2024-11-20 08:48:48.691817] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:18.940 00:15:18.940 real 0m26.557s 
00:15:18.940 user 0m33.092s 00:15:18.940 sys 0m3.726s 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.940 ************************************ 00:15:18.940 END TEST raid_rebuild_test_sb 00:15:18.940 ************************************ 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.940 08:48:49 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:15:18.940 08:48:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:18.940 08:48:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.940 08:48:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:18.940 ************************************ 00:15:18.940 START TEST raid_rebuild_test_io 00:15:18.940 ************************************ 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:18.940 
08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:18.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76696 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76696 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76696 ']' 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.940 08:48:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.198 [2024-11-20 08:48:49.880711] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:15:19.198 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:19.198 Zero copy mechanism will not be used. 
00:15:19.198 [2024-11-20 08:48:49.881682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76696 ] 00:15:19.198 [2024-11-20 08:48:50.098374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.457 [2024-11-20 08:48:50.229748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.716 [2024-11-20 08:48:50.431406] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.716 [2024-11-20 08:48:50.431492] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.974 08:48:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.974 08:48:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:15:19.974 08:48:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:19.974 08:48:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:19.974 08:48:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.974 08:48:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.233 BaseBdev1_malloc 00:15:20.233 08:48:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.233 08:48:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:20.233 08:48:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.233 08:48:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.233 [2024-11-20 08:48:50.898102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:15:20.233 [2024-11-20 08:48:50.898209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.233 [2024-11-20 08:48:50.898244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:20.233 [2024-11-20 08:48:50.898262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.233 [2024-11-20 08:48:50.901049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.233 [2024-11-20 08:48:50.901270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:20.233 BaseBdev1 00:15:20.233 08:48:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.233 08:48:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:20.233 08:48:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:20.233 08:48:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.233 08:48:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.233 BaseBdev2_malloc 00:15:20.233 08:48:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.233 08:48:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:20.233 08:48:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.233 08:48:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.234 [2024-11-20 08:48:50.954027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:20.234 [2024-11-20 08:48:50.954113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.234 [2024-11-20 08:48:50.954140] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:20.234 [2024-11-20 08:48:50.954188] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.234 [2024-11-20 08:48:50.957029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.234 [2024-11-20 08:48:50.957246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:20.234 BaseBdev2 00:15:20.234 08:48:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.234 08:48:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:20.234 08:48:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.234 08:48:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.234 spare_malloc 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.234 spare_delay 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.234 [2024-11-20 08:48:51.028974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:15:20.234 [2024-11-20 08:48:51.029200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.234 [2024-11-20 08:48:51.029240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:20.234 [2024-11-20 08:48:51.029260] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.234 [2024-11-20 08:48:51.032041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.234 [2024-11-20 08:48:51.032093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:20.234 spare 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.234 [2024-11-20 08:48:51.037061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:20.234 [2024-11-20 08:48:51.039617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:20.234 [2024-11-20 08:48:51.039893] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:20.234 [2024-11-20 08:48:51.040017] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:20.234 [2024-11-20 08:48:51.040399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:20.234 [2024-11-20 08:48:51.040741] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:20.234 [2024-11-20 08:48:51.040882] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:15:20.234 [2024-11-20 08:48:51.041336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.234 
"name": "raid_bdev1", 00:15:20.234 "uuid": "5f458fa9-e232-4efc-9305-18f8fec20f30", 00:15:20.234 "strip_size_kb": 0, 00:15:20.234 "state": "online", 00:15:20.234 "raid_level": "raid1", 00:15:20.234 "superblock": false, 00:15:20.234 "num_base_bdevs": 2, 00:15:20.234 "num_base_bdevs_discovered": 2, 00:15:20.234 "num_base_bdevs_operational": 2, 00:15:20.234 "base_bdevs_list": [ 00:15:20.234 { 00:15:20.234 "name": "BaseBdev1", 00:15:20.234 "uuid": "c51f26b9-a4a5-5cd5-b581-4afc9e99bd9b", 00:15:20.234 "is_configured": true, 00:15:20.234 "data_offset": 0, 00:15:20.234 "data_size": 65536 00:15:20.234 }, 00:15:20.234 { 00:15:20.234 "name": "BaseBdev2", 00:15:20.234 "uuid": "19bacf18-79ff-55d8-92e8-ee798a8dbca2", 00:15:20.234 "is_configured": true, 00:15:20.234 "data_offset": 0, 00:15:20.234 "data_size": 65536 00:15:20.234 } 00:15:20.234 ] 00:15:20.234 }' 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.234 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.801 [2024-11-20 08:48:51.557859] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.801 [2024-11-20 08:48:51.657496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:20.801 08:48:51 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.801 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.802 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.802 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.802 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.802 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.802 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.802 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.802 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.060 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.060 "name": "raid_bdev1", 00:15:21.060 "uuid": "5f458fa9-e232-4efc-9305-18f8fec20f30", 00:15:21.060 "strip_size_kb": 0, 00:15:21.060 "state": "online", 00:15:21.060 "raid_level": "raid1", 00:15:21.060 "superblock": false, 00:15:21.060 "num_base_bdevs": 2, 00:15:21.060 "num_base_bdevs_discovered": 1, 00:15:21.060 "num_base_bdevs_operational": 1, 00:15:21.060 "base_bdevs_list": [ 00:15:21.060 { 00:15:21.060 "name": null, 00:15:21.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.060 "is_configured": false, 00:15:21.060 "data_offset": 0, 00:15:21.060 "data_size": 65536 00:15:21.060 }, 00:15:21.060 { 00:15:21.060 "name": "BaseBdev2", 00:15:21.060 "uuid": "19bacf18-79ff-55d8-92e8-ee798a8dbca2", 00:15:21.060 "is_configured": true, 00:15:21.060 "data_offset": 0, 00:15:21.060 "data_size": 65536 00:15:21.060 } 00:15:21.060 ] 00:15:21.060 }' 00:15:21.060 08:48:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:21.060 08:48:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.060 [2024-11-20 08:48:51.785519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:21.060 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:21.060 Zero copy mechanism will not be used. 00:15:21.060 Running I/O for 60 seconds... 00:15:21.319 08:48:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:21.319 08:48:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.319 08:48:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.319 [2024-11-20 08:48:52.180989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:21.319 08:48:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.319 08:48:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:21.319 [2024-11-20 08:48:52.223997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:21.319 [2024-11-20 08:48:52.226545] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:21.577 [2024-11-20 08:48:52.343216] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:21.578 [2024-11-20 08:48:52.343881] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:21.836 [2024-11-20 08:48:52.561873] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:21.836 [2024-11-20 08:48:52.562162] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:22.095 165.00 IOPS, 495.00 MiB/s 
[2024-11-20T08:48:53.011Z] [2024-11-20 08:48:52.947932] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:22.354 [2024-11-20 08:48:53.165701] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:22.354 [2024-11-20 08:48:53.166108] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:22.354 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:22.354 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.354 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:22.354 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:22.354 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.354 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.354 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.354 08:48:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.354 08:48:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.354 08:48:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.612 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.612 "name": "raid_bdev1", 00:15:22.612 "uuid": "5f458fa9-e232-4efc-9305-18f8fec20f30", 00:15:22.612 "strip_size_kb": 0, 00:15:22.612 "state": "online", 00:15:22.612 "raid_level": "raid1", 00:15:22.612 "superblock": false, 00:15:22.612 "num_base_bdevs": 2, 00:15:22.612 
"num_base_bdevs_discovered": 2, 00:15:22.612 "num_base_bdevs_operational": 2, 00:15:22.612 "process": { 00:15:22.612 "type": "rebuild", 00:15:22.612 "target": "spare", 00:15:22.612 "progress": { 00:15:22.612 "blocks": 10240, 00:15:22.612 "percent": 15 00:15:22.612 } 00:15:22.612 }, 00:15:22.612 "base_bdevs_list": [ 00:15:22.612 { 00:15:22.612 "name": "spare", 00:15:22.612 "uuid": "4b2ed765-a7fa-520f-b470-9ec8ba573c62", 00:15:22.612 "is_configured": true, 00:15:22.612 "data_offset": 0, 00:15:22.612 "data_size": 65536 00:15:22.612 }, 00:15:22.612 { 00:15:22.612 "name": "BaseBdev2", 00:15:22.612 "uuid": "19bacf18-79ff-55d8-92e8-ee798a8dbca2", 00:15:22.612 "is_configured": true, 00:15:22.612 "data_offset": 0, 00:15:22.612 "data_size": 65536 00:15:22.612 } 00:15:22.612 ] 00:15:22.612 }' 00:15:22.612 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.612 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:22.612 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.613 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:22.613 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:22.613 08:48:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.613 08:48:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.613 [2024-11-20 08:48:53.401074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:22.613 [2024-11-20 08:48:53.501579] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:22.871 [2024-11-20 08:48:53.595570] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such 
device 00:15:22.871 [2024-11-20 08:48:53.606250] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.871 [2024-11-20 08:48:53.606481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:22.871 [2024-11-20 08:48:53.606518] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:22.871 [2024-11-20 08:48:53.642521] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:15:22.871 08:48:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.871 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:22.871 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.871 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.871 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.871 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.871 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:22.871 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.871 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.871 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.871 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.871 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.871 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.871 08:48:53 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.871 08:48:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.871 08:48:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.871 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.871 "name": "raid_bdev1", 00:15:22.871 "uuid": "5f458fa9-e232-4efc-9305-18f8fec20f30", 00:15:22.871 "strip_size_kb": 0, 00:15:22.871 "state": "online", 00:15:22.871 "raid_level": "raid1", 00:15:22.871 "superblock": false, 00:15:22.871 "num_base_bdevs": 2, 00:15:22.871 "num_base_bdevs_discovered": 1, 00:15:22.871 "num_base_bdevs_operational": 1, 00:15:22.871 "base_bdevs_list": [ 00:15:22.871 { 00:15:22.871 "name": null, 00:15:22.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.871 "is_configured": false, 00:15:22.871 "data_offset": 0, 00:15:22.871 "data_size": 65536 00:15:22.871 }, 00:15:22.871 { 00:15:22.871 "name": "BaseBdev2", 00:15:22.871 "uuid": "19bacf18-79ff-55d8-92e8-ee798a8dbca2", 00:15:22.871 "is_configured": true, 00:15:22.871 "data_offset": 0, 00:15:22.871 "data_size": 65536 00:15:22.871 } 00:15:22.871 ] 00:15:22.871 }' 00:15:22.871 08:48:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.871 08:48:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.389 124.50 IOPS, 373.50 MiB/s [2024-11-20T08:48:54.305Z] 08:48:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:23.389 08:48:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.389 08:48:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:23.389 08:48:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:23.389 08:48:54 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.389 08:48:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.389 08:48:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.389 08:48:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.389 08:48:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.389 08:48:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.389 08:48:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.389 "name": "raid_bdev1", 00:15:23.389 "uuid": "5f458fa9-e232-4efc-9305-18f8fec20f30", 00:15:23.389 "strip_size_kb": 0, 00:15:23.389 "state": "online", 00:15:23.389 "raid_level": "raid1", 00:15:23.389 "superblock": false, 00:15:23.389 "num_base_bdevs": 2, 00:15:23.389 "num_base_bdevs_discovered": 1, 00:15:23.389 "num_base_bdevs_operational": 1, 00:15:23.389 "base_bdevs_list": [ 00:15:23.389 { 00:15:23.389 "name": null, 00:15:23.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.389 "is_configured": false, 00:15:23.389 "data_offset": 0, 00:15:23.389 "data_size": 65536 00:15:23.389 }, 00:15:23.389 { 00:15:23.389 "name": "BaseBdev2", 00:15:23.389 "uuid": "19bacf18-79ff-55d8-92e8-ee798a8dbca2", 00:15:23.389 "is_configured": true, 00:15:23.389 "data_offset": 0, 00:15:23.389 "data_size": 65536 00:15:23.389 } 00:15:23.389 ] 00:15:23.389 }' 00:15:23.389 08:48:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.389 08:48:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:23.389 08:48:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.648 08:48:54 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:23.648 08:48:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:23.648 08:48:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.648 08:48:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.648 [2024-11-20 08:48:54.340847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:23.648 08:48:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.648 08:48:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:23.648 [2024-11-20 08:48:54.426328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:23.648 [2024-11-20 08:48:54.429066] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:23.648 [2024-11-20 08:48:54.546402] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:23.648 [2024-11-20 08:48:54.547017] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:23.909 [2024-11-20 08:48:54.761409] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:23.909 [2024-11-20 08:48:54.761798] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:24.479 139.00 IOPS, 417.00 MiB/s [2024-11-20T08:48:55.395Z] [2024-11-20 08:48:55.112679] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:24.479 [2024-11-20 08:48:55.338669] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:24.738 08:48:55 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.738 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.738 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.738 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.738 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.738 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.738 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.738 08:48:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.738 08:48:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.738 08:48:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.738 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.738 "name": "raid_bdev1", 00:15:24.738 "uuid": "5f458fa9-e232-4efc-9305-18f8fec20f30", 00:15:24.738 "strip_size_kb": 0, 00:15:24.738 "state": "online", 00:15:24.738 "raid_level": "raid1", 00:15:24.738 "superblock": false, 00:15:24.738 "num_base_bdevs": 2, 00:15:24.738 "num_base_bdevs_discovered": 2, 00:15:24.738 "num_base_bdevs_operational": 2, 00:15:24.738 "process": { 00:15:24.738 "type": "rebuild", 00:15:24.738 "target": "spare", 00:15:24.738 "progress": { 00:15:24.738 "blocks": 10240, 00:15:24.738 "percent": 15 00:15:24.738 } 00:15:24.738 }, 00:15:24.738 "base_bdevs_list": [ 00:15:24.738 { 00:15:24.739 "name": "spare", 00:15:24.739 "uuid": "4b2ed765-a7fa-520f-b470-9ec8ba573c62", 00:15:24.739 "is_configured": true, 00:15:24.739 "data_offset": 0, 00:15:24.739 "data_size": 65536 
00:15:24.739 }, 00:15:24.739 { 00:15:24.739 "name": "BaseBdev2", 00:15:24.739 "uuid": "19bacf18-79ff-55d8-92e8-ee798a8dbca2", 00:15:24.739 "is_configured": true, 00:15:24.739 "data_offset": 0, 00:15:24.739 "data_size": 65536 00:15:24.739 } 00:15:24.739 ] 00:15:24.739 }' 00:15:24.739 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.739 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.739 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.739 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.739 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:24.739 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:24.739 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:24.739 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:24.739 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=436 00:15:24.739 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:24.739 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.739 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.739 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.739 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.739 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.739 08:48:55 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.739 08:48:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.739 08:48:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.739 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.739 [2024-11-20 08:48:55.584082] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:24.739 08:48:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.739 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.739 "name": "raid_bdev1", 00:15:24.739 "uuid": "5f458fa9-e232-4efc-9305-18f8fec20f30", 00:15:24.739 "strip_size_kb": 0, 00:15:24.739 "state": "online", 00:15:24.739 "raid_level": "raid1", 00:15:24.739 "superblock": false, 00:15:24.739 "num_base_bdevs": 2, 00:15:24.739 "num_base_bdevs_discovered": 2, 00:15:24.739 "num_base_bdevs_operational": 2, 00:15:24.739 "process": { 00:15:24.739 "type": "rebuild", 00:15:24.739 "target": "spare", 00:15:24.739 "progress": { 00:15:24.739 "blocks": 12288, 00:15:24.739 "percent": 18 00:15:24.739 } 00:15:24.739 }, 00:15:24.739 "base_bdevs_list": [ 00:15:24.739 { 00:15:24.739 "name": "spare", 00:15:24.739 "uuid": "4b2ed765-a7fa-520f-b470-9ec8ba573c62", 00:15:24.739 "is_configured": true, 00:15:24.739 "data_offset": 0, 00:15:24.739 "data_size": 65536 00:15:24.739 }, 00:15:24.739 { 00:15:24.739 "name": "BaseBdev2", 00:15:24.739 "uuid": "19bacf18-79ff-55d8-92e8-ee798a8dbca2", 00:15:24.739 "is_configured": true, 00:15:24.739 "data_offset": 0, 00:15:24.739 "data_size": 65536 00:15:24.739 } 00:15:24.739 ] 00:15:24.739 }' 00:15:24.739 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.997 08:48:55 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.997 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.997 [2024-11-20 08:48:55.705735] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:24.997 [2024-11-20 08:48:55.706210] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:24.997 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.997 08:48:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:25.256 122.00 IOPS, 366.00 MiB/s [2024-11-20T08:48:56.172Z] [2024-11-20 08:48:56.078048] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:25.256 [2024-11-20 08:48:56.078533] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:26.193 08:48:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:26.193 08:48:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.193 08:48:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.193 08:48:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.193 08:48:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.193 08:48:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.193 08:48:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.193 08:48:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.193 08:48:56 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.193 08:48:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.193 08:48:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.193 108.00 IOPS, 324.00 MiB/s [2024-11-20T08:48:57.109Z] 08:48:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.193 "name": "raid_bdev1", 00:15:26.193 "uuid": "5f458fa9-e232-4efc-9305-18f8fec20f30", 00:15:26.193 "strip_size_kb": 0, 00:15:26.193 "state": "online", 00:15:26.193 "raid_level": "raid1", 00:15:26.193 "superblock": false, 00:15:26.193 "num_base_bdevs": 2, 00:15:26.193 "num_base_bdevs_discovered": 2, 00:15:26.193 "num_base_bdevs_operational": 2, 00:15:26.193 "process": { 00:15:26.193 "type": "rebuild", 00:15:26.193 "target": "spare", 00:15:26.193 "progress": { 00:15:26.193 "blocks": 28672, 00:15:26.193 "percent": 43 00:15:26.193 } 00:15:26.193 }, 00:15:26.193 "base_bdevs_list": [ 00:15:26.193 { 00:15:26.193 "name": "spare", 00:15:26.193 "uuid": "4b2ed765-a7fa-520f-b470-9ec8ba573c62", 00:15:26.193 "is_configured": true, 00:15:26.193 "data_offset": 0, 00:15:26.193 "data_size": 65536 00:15:26.193 }, 00:15:26.193 { 00:15:26.193 "name": "BaseBdev2", 00:15:26.193 "uuid": "19bacf18-79ff-55d8-92e8-ee798a8dbca2", 00:15:26.193 "is_configured": true, 00:15:26.193 "data_offset": 0, 00:15:26.193 "data_size": 65536 00:15:26.193 } 00:15:26.193 ] 00:15:26.193 }' 00:15:26.193 08:48:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.193 08:48:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.193 08:48:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.193 08:48:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.193 
08:48:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:26.193 [2024-11-20 08:48:56.980224] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:26.451 [2024-11-20 08:48:57.217168] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:27.018 96.83 IOPS, 290.50 MiB/s [2024-11-20T08:48:57.934Z] [2024-11-20 08:48:57.804393] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:27.018 08:48:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:27.018 08:48:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.018 08:48:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.018 08:48:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.018 08:48:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.018 08:48:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.018 08:48:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.018 08:48:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.018 08:48:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.018 08:48:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.278 08:48:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.278 08:48:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.278 "name": "raid_bdev1", 00:15:27.278 "uuid": 
"5f458fa9-e232-4efc-9305-18f8fec20f30", 00:15:27.278 "strip_size_kb": 0, 00:15:27.278 "state": "online", 00:15:27.278 "raid_level": "raid1", 00:15:27.278 "superblock": false, 00:15:27.278 "num_base_bdevs": 2, 00:15:27.278 "num_base_bdevs_discovered": 2, 00:15:27.278 "num_base_bdevs_operational": 2, 00:15:27.278 "process": { 00:15:27.278 "type": "rebuild", 00:15:27.278 "target": "spare", 00:15:27.278 "progress": { 00:15:27.278 "blocks": 45056, 00:15:27.278 "percent": 68 00:15:27.278 } 00:15:27.278 }, 00:15:27.278 "base_bdevs_list": [ 00:15:27.278 { 00:15:27.278 "name": "spare", 00:15:27.278 "uuid": "4b2ed765-a7fa-520f-b470-9ec8ba573c62", 00:15:27.278 "is_configured": true, 00:15:27.278 "data_offset": 0, 00:15:27.278 "data_size": 65536 00:15:27.278 }, 00:15:27.278 { 00:15:27.278 "name": "BaseBdev2", 00:15:27.278 "uuid": "19bacf18-79ff-55d8-92e8-ee798a8dbca2", 00:15:27.278 "is_configured": true, 00:15:27.278 "data_offset": 0, 00:15:27.278 "data_size": 65536 00:15:27.278 } 00:15:27.278 ] 00:15:27.278 }' 00:15:27.278 08:48:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.278 08:48:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.278 [2024-11-20 08:48:58.028045] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:27.278 08:48:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.278 08:48:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.278 08:48:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:27.536 [2024-11-20 08:48:58.361094] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:28.103 87.86 IOPS, 263.57 MiB/s [2024-11-20T08:48:59.019Z] [2024-11-20 08:48:58.816515] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:15:28.103 [2024-11-20 08:48:58.938758] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:28.362 08:48:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:28.362 08:48:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:28.362 08:48:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.362 08:48:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:28.362 08:48:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:28.362 08:48:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.362 08:48:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.362 08:48:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.362 08:48:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.362 08:48:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.362 08:48:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.362 08:48:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.362 "name": "raid_bdev1", 00:15:28.362 "uuid": "5f458fa9-e232-4efc-9305-18f8fec20f30", 00:15:28.362 "strip_size_kb": 0, 00:15:28.362 "state": "online", 00:15:28.362 "raid_level": "raid1", 00:15:28.362 "superblock": false, 00:15:28.362 "num_base_bdevs": 2, 00:15:28.362 "num_base_bdevs_discovered": 2, 00:15:28.362 "num_base_bdevs_operational": 2, 00:15:28.362 "process": { 00:15:28.362 "type": 
"rebuild", 00:15:28.362 "target": "spare", 00:15:28.362 "progress": { 00:15:28.362 "blocks": 59392, 00:15:28.362 "percent": 90 00:15:28.362 } 00:15:28.362 }, 00:15:28.362 "base_bdevs_list": [ 00:15:28.362 { 00:15:28.362 "name": "spare", 00:15:28.362 "uuid": "4b2ed765-a7fa-520f-b470-9ec8ba573c62", 00:15:28.362 "is_configured": true, 00:15:28.362 "data_offset": 0, 00:15:28.362 "data_size": 65536 00:15:28.362 }, 00:15:28.362 { 00:15:28.363 "name": "BaseBdev2", 00:15:28.363 "uuid": "19bacf18-79ff-55d8-92e8-ee798a8dbca2", 00:15:28.363 "is_configured": true, 00:15:28.363 "data_offset": 0, 00:15:28.363 "data_size": 65536 00:15:28.363 } 00:15:28.363 ] 00:15:28.363 }' 00:15:28.363 08:48:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.363 08:48:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:28.363 08:48:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.363 08:48:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.363 08:48:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:28.621 [2024-11-20 08:48:59.379864] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:28.621 [2024-11-20 08:48:59.479853] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:28.621 [2024-11-20 08:48:59.482023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.446 80.62 IOPS, 241.88 MiB/s [2024-11-20T08:49:00.362Z] 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:29.446 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.446 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:29.446 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.446 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.446 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.446 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.446 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.446 08:49:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.446 08:49:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.446 08:49:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.446 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.446 "name": "raid_bdev1", 00:15:29.446 "uuid": "5f458fa9-e232-4efc-9305-18f8fec20f30", 00:15:29.446 "strip_size_kb": 0, 00:15:29.446 "state": "online", 00:15:29.446 "raid_level": "raid1", 00:15:29.446 "superblock": false, 00:15:29.446 "num_base_bdevs": 2, 00:15:29.446 "num_base_bdevs_discovered": 2, 00:15:29.446 "num_base_bdevs_operational": 2, 00:15:29.446 "base_bdevs_list": [ 00:15:29.446 { 00:15:29.446 "name": "spare", 00:15:29.446 "uuid": "4b2ed765-a7fa-520f-b470-9ec8ba573c62", 00:15:29.446 "is_configured": true, 00:15:29.446 "data_offset": 0, 00:15:29.446 "data_size": 65536 00:15:29.446 }, 00:15:29.446 { 00:15:29.446 "name": "BaseBdev2", 00:15:29.446 "uuid": "19bacf18-79ff-55d8-92e8-ee798a8dbca2", 00:15:29.446 "is_configured": true, 00:15:29.447 "data_offset": 0, 00:15:29.447 "data_size": 65536 00:15:29.447 } 00:15:29.447 ] 00:15:29.447 }' 00:15:29.447 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.705 08:49:00 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:29.705 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.705 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:29.705 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:29.705 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:29.705 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.705 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:29.705 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:29.705 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.705 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.705 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.705 08:49:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.705 08:49:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.705 08:49:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.705 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.705 "name": "raid_bdev1", 00:15:29.705 "uuid": "5f458fa9-e232-4efc-9305-18f8fec20f30", 00:15:29.705 "strip_size_kb": 0, 00:15:29.705 "state": "online", 00:15:29.705 "raid_level": "raid1", 00:15:29.705 "superblock": false, 00:15:29.705 "num_base_bdevs": 2, 00:15:29.705 "num_base_bdevs_discovered": 2, 00:15:29.705 "num_base_bdevs_operational": 2, 00:15:29.705 "base_bdevs_list": [ 
00:15:29.705 { 00:15:29.705 "name": "spare", 00:15:29.705 "uuid": "4b2ed765-a7fa-520f-b470-9ec8ba573c62", 00:15:29.705 "is_configured": true, 00:15:29.705 "data_offset": 0, 00:15:29.705 "data_size": 65536 00:15:29.705 }, 00:15:29.705 { 00:15:29.705 "name": "BaseBdev2", 00:15:29.705 "uuid": "19bacf18-79ff-55d8-92e8-ee798a8dbca2", 00:15:29.705 "is_configured": true, 00:15:29.705 "data_offset": 0, 00:15:29.705 "data_size": 65536 00:15:29.705 } 00:15:29.705 ] 00:15:29.705 }' 00:15:29.706 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.706 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:29.706 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.706 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:29.706 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:29.706 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.706 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.706 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.706 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.706 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:29.706 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.706 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.706 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.706 08:49:00 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.706 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.706 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.706 08:49:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.706 08:49:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.706 08:49:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.964 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.964 "name": "raid_bdev1", 00:15:29.964 "uuid": "5f458fa9-e232-4efc-9305-18f8fec20f30", 00:15:29.964 "strip_size_kb": 0, 00:15:29.964 "state": "online", 00:15:29.964 "raid_level": "raid1", 00:15:29.964 "superblock": false, 00:15:29.964 "num_base_bdevs": 2, 00:15:29.964 "num_base_bdevs_discovered": 2, 00:15:29.964 "num_base_bdevs_operational": 2, 00:15:29.964 "base_bdevs_list": [ 00:15:29.964 { 00:15:29.964 "name": "spare", 00:15:29.964 "uuid": "4b2ed765-a7fa-520f-b470-9ec8ba573c62", 00:15:29.964 "is_configured": true, 00:15:29.964 "data_offset": 0, 00:15:29.964 "data_size": 65536 00:15:29.964 }, 00:15:29.964 { 00:15:29.964 "name": "BaseBdev2", 00:15:29.964 "uuid": "19bacf18-79ff-55d8-92e8-ee798a8dbca2", 00:15:29.964 "is_configured": true, 00:15:29.964 "data_offset": 0, 00:15:29.964 "data_size": 65536 00:15:29.964 } 00:15:29.964 ] 00:15:29.964 }' 00:15:29.964 08:49:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.965 08:49:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.223 75.67 IOPS, 227.00 MiB/s [2024-11-20T08:49:01.139Z] 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:30.223 08:49:01 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.223 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.223 [2024-11-20 08:49:01.104996] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:30.223 [2024-11-20 08:49:01.105032] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:30.482 00:15:30.482 Latency(us) 00:15:30.482 [2024-11-20T08:49:01.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.482 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:30.482 raid_bdev1 : 9.36 74.07 222.20 0.00 0.00 18463.80 284.86 121062.87 00:15:30.482 [2024-11-20T08:49:01.398Z] =================================================================================================================== 00:15:30.482 [2024-11-20T08:49:01.398Z] Total : 74.07 222.20 0.00 0.00 18463.80 284.86 121062.87 00:15:30.482 [2024-11-20 08:49:01.164218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.482 [2024-11-20 08:49:01.164269] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:30.482 { 00:15:30.482 "results": [ 00:15:30.482 { 00:15:30.482 "job": "raid_bdev1", 00:15:30.482 "core_mask": "0x1", 00:15:30.482 "workload": "randrw", 00:15:30.482 "percentage": 50, 00:15:30.482 "status": "finished", 00:15:30.482 "queue_depth": 2, 00:15:30.482 "io_size": 3145728, 00:15:30.482 "runtime": 9.356557, 00:15:30.482 "iops": 74.06570600702801, 00:15:30.482 "mibps": 222.19711802108404, 00:15:30.482 "io_failed": 0, 00:15:30.482 "io_timeout": 0, 00:15:30.482 "avg_latency_us": 18463.800015741832, 00:15:30.482 "min_latency_us": 284.85818181818183, 00:15:30.482 "max_latency_us": 121062.86545454545 00:15:30.482 } 00:15:30.482 ], 00:15:30.482 "core_count": 1 00:15:30.482 } 00:15:30.482 [2024-11-20 08:49:01.164370] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:30.482 [2024-11-20 08:49:01.164386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:30.482 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.482 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.482 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:30.482 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.482 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.482 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.482 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:30.482 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:30.482 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:30.482 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:30.482 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:30.482 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:30.482 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:30.482 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:30.482 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:30.482 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:30.482 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:30.482 
08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:30.482 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:30.741 /dev/nbd0 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:30.741 1+0 records in 00:15:30.741 1+0 records out 00:15:30.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026549 s, 15.4 MB/s 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:30.741 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:15:31.001 /dev/nbd1 00:15:31.001 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:31.001 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 
-- # waitfornbd nbd1 00:15:31.001 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:31.001 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:31.001 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:31.001 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:31.001 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:31.001 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:31.001 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:31.001 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:31.001 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:31.001 1+0 records in 00:15:31.001 1+0 records out 00:15:31.001 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416628 s, 9.8 MB/s 00:15:31.001 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:31.259 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:31.259 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:31.259 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:31.259 08:49:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:31.259 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:31.259 08:49:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:31.259 08:49:01 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:31.259 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:31.259 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:31.259 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:31.259 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:31.259 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:31.259 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.259 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:31.519 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:31.519 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:31.519 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:31.519 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.519 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.519 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:31.519 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:31.519 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.519 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:31.519 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:31.519 08:49:02 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:31.519 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:31.519 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:31.519 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.519 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:31.777 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:31.777 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:31.777 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:31.777 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.777 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.777 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:31.777 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:31.777 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.777 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:31.777 08:49:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76696 00:15:31.777 08:49:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76696 ']' 00:15:31.777 08:49:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76696 00:15:31.777 08:49:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:15:31.777 08:49:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.777 
08:49:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76696 00:15:31.777 killing process with pid 76696 00:15:31.777 Received shutdown signal, test time was about 10.858186 seconds 00:15:31.777 00:15:31.777 Latency(us) 00:15:31.777 [2024-11-20T08:49:02.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.777 [2024-11-20T08:49:02.693Z] =================================================================================================================== 00:15:31.777 [2024-11-20T08:49:02.693Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:31.777 08:49:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:31.777 08:49:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:31.777 08:49:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76696' 00:15:31.777 08:49:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76696 00:15:31.777 [2024-11-20 08:49:02.646546] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:31.777 08:49:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76696 00:15:32.036 [2024-11-20 08:49:02.846149] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:33.413 00:15:33.413 real 0m14.131s 00:15:33.413 user 0m18.288s 00:15:33.413 sys 0m1.447s 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:33.413 ************************************ 00:15:33.413 END TEST raid_rebuild_test_io 00:15:33.413 ************************************ 00:15:33.413 08:49:03 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test 
raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:15:33.413 08:49:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:33.413 08:49:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.413 08:49:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:33.413 ************************************ 00:15:33.413 START TEST raid_rebuild_test_sb_io 00:15:33.413 ************************************ 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:33.413 08:49:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:33.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77096 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77096 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77096 ']' 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 
-- # local max_retries=100 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.413 08:49:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:33.413 [2024-11-20 08:49:04.049821] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:15:33.413 [2024-11-20 08:49:04.050214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77096 ] 00:15:33.413 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:33.413 Zero copy mechanism will not be used. 00:15:33.413 [2024-11-20 08:49:04.226982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.672 [2024-11-20 08:49:04.350685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.672 [2024-11-20 08:49:04.544856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.672 [2024-11-20 08:49:04.545112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:34.239 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.239 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:15:34.239 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:34.239 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:34.239 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:34.239 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.239 BaseBdev1_malloc 00:15:34.239 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.239 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:34.239 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.239 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.239 [2024-11-20 08:49:05.086289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:34.239 [2024-11-20 08:49:05.086488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.239 [2024-11-20 08:49:05.086640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:34.239 [2024-11-20 08:49:05.086765] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.239 [2024-11-20 08:49:05.089650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.239 [2024-11-20 08:49:05.089714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:34.239 BaseBdev1 00:15:34.239 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.239 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:34.239 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:34.239 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.239 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.239 BaseBdev2_malloc 00:15:34.239 08:49:05 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.239 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:34.239 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.239 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.239 [2024-11-20 08:49:05.132336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:34.239 [2024-11-20 08:49:05.132402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.239 [2024-11-20 08:49:05.132430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:34.239 [2024-11-20 08:49:05.132450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.239 [2024-11-20 08:49:05.135247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.239 [2024-11-20 08:49:05.135315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:34.239 BaseBdev2 00:15:34.239 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.239 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:34.239 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.239 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.498 spare_malloc 00:15:34.498 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.498 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:34.498 08:49:05 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.498 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.498 spare_delay 00:15:34.498 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.498 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:34.499 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.499 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.499 [2024-11-20 08:49:05.204940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:34.499 [2024-11-20 08:49:05.205026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.499 [2024-11-20 08:49:05.205054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:34.499 [2024-11-20 08:49:05.205072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.499 [2024-11-20 08:49:05.207901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.499 [2024-11-20 08:49:05.207961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:34.499 spare 00:15:34.499 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.499 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:34.499 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.499 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.499 [2024-11-20 08:49:05.213014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:15:34.499 [2024-11-20 08:49:05.215484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:34.499 [2024-11-20 08:49:05.215737] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:34.499 [2024-11-20 08:49:05.215764] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:34.499 [2024-11-20 08:49:05.216073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:34.499 [2024-11-20 08:49:05.216326] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:34.499 [2024-11-20 08:49:05.216357] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:34.499 [2024-11-20 08:49:05.216541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.499 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.499 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:34.499 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.499 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.499 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.499 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.499 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:34.499 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.499 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.499 08:49:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.499 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.499 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.499 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.499 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.499 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.499 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.499 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.499 "name": "raid_bdev1", 00:15:34.499 "uuid": "4ceb295c-e2f5-400a-b5d0-fc0e517b579d", 00:15:34.499 "strip_size_kb": 0, 00:15:34.499 "state": "online", 00:15:34.499 "raid_level": "raid1", 00:15:34.499 "superblock": true, 00:15:34.499 "num_base_bdevs": 2, 00:15:34.499 "num_base_bdevs_discovered": 2, 00:15:34.499 "num_base_bdevs_operational": 2, 00:15:34.499 "base_bdevs_list": [ 00:15:34.499 { 00:15:34.499 "name": "BaseBdev1", 00:15:34.499 "uuid": "e8b9acba-b1a8-5443-bde3-2c303ef517dd", 00:15:34.499 "is_configured": true, 00:15:34.499 "data_offset": 2048, 00:15:34.499 "data_size": 63488 00:15:34.499 }, 00:15:34.499 { 00:15:34.499 "name": "BaseBdev2", 00:15:34.499 "uuid": "a530dfc7-f658-5f93-b97e-bcfb780db0b1", 00:15:34.499 "is_configured": true, 00:15:34.499 "data_offset": 2048, 00:15:34.499 "data_size": 63488 00:15:34.499 } 00:15:34.499 ] 00:15:34.499 }' 00:15:34.499 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.499 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:35.066 [2024-11-20 08:49:05.749533] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:15:35.066 [2024-11-20 08:49:05.857108] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:35.066 "name": "raid_bdev1", 00:15:35.066 "uuid": "4ceb295c-e2f5-400a-b5d0-fc0e517b579d", 00:15:35.066 "strip_size_kb": 0, 00:15:35.066 "state": "online", 00:15:35.066 "raid_level": "raid1", 00:15:35.066 "superblock": true, 00:15:35.066 "num_base_bdevs": 2, 00:15:35.066 "num_base_bdevs_discovered": 1, 00:15:35.066 "num_base_bdevs_operational": 1, 00:15:35.066 "base_bdevs_list": [ 00:15:35.066 { 00:15:35.066 "name": null, 00:15:35.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.066 "is_configured": false, 00:15:35.066 "data_offset": 0, 00:15:35.066 "data_size": 63488 00:15:35.066 }, 00:15:35.066 { 00:15:35.066 "name": "BaseBdev2", 00:15:35.066 "uuid": "a530dfc7-f658-5f93-b97e-bcfb780db0b1", 00:15:35.066 "is_configured": true, 00:15:35.066 "data_offset": 2048, 00:15:35.066 "data_size": 63488 00:15:35.066 } 00:15:35.066 ] 00:15:35.066 }' 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.066 08:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.325 [2024-11-20 08:49:05.989404] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:35.325 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:35.325 Zero copy mechanism will not be used. 00:15:35.325 Running I/O for 60 seconds... 
00:15:35.583 08:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:35.583 08:49:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.583 08:49:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.583 [2024-11-20 08:49:06.406263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:35.583 08:49:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.583 08:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:35.583 [2024-11-20 08:49:06.479017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:35.583 [2024-11-20 08:49:06.481564] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:35.842 [2024-11-20 08:49:06.605948] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:35.842 [2024-11-20 08:49:06.606549] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:36.099 [2024-11-20 08:49:06.816223] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:36.099 [2024-11-20 08:49:06.816481] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:36.358 184.00 IOPS, 552.00 MiB/s [2024-11-20T08:49:07.274Z] [2024-11-20 08:49:07.060066] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:36.358 [2024-11-20 08:49:07.060597] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:36.616 [2024-11-20 08:49:07.284431] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:36.616 [2024-11-20 08:49:07.284759] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:36.616 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.616 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.616 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.616 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.616 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.616 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.616 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.616 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.616 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.616 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.616 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.617 "name": "raid_bdev1", 00:15:36.617 "uuid": "4ceb295c-e2f5-400a-b5d0-fc0e517b579d", 00:15:36.617 "strip_size_kb": 0, 00:15:36.617 "state": "online", 00:15:36.617 "raid_level": "raid1", 00:15:36.617 "superblock": true, 00:15:36.617 "num_base_bdevs": 2, 00:15:36.617 "num_base_bdevs_discovered": 2, 00:15:36.617 "num_base_bdevs_operational": 2, 00:15:36.617 "process": { 00:15:36.617 "type": "rebuild", 00:15:36.617 "target": "spare", 00:15:36.617 "progress": { 
00:15:36.617 "blocks": 10240, 00:15:36.617 "percent": 16 00:15:36.617 } 00:15:36.617 }, 00:15:36.617 "base_bdevs_list": [ 00:15:36.617 { 00:15:36.617 "name": "spare", 00:15:36.617 "uuid": "057e089f-3de7-5341-9b7f-94d1a4f4db98", 00:15:36.617 "is_configured": true, 00:15:36.617 "data_offset": 2048, 00:15:36.617 "data_size": 63488 00:15:36.617 }, 00:15:36.617 { 00:15:36.617 "name": "BaseBdev2", 00:15:36.617 "uuid": "a530dfc7-f658-5f93-b97e-bcfb780db0b1", 00:15:36.617 "is_configured": true, 00:15:36.617 "data_offset": 2048, 00:15:36.617 "data_size": 63488 00:15:36.617 } 00:15:36.617 ] 00:15:36.617 }' 00:15:36.617 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.875 [2024-11-20 08:49:07.622880] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.875 [2024-11-20 08:49:07.632905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:36.875 [2024-11-20 08:49:07.682028] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:36.875 [2024-11-20 08:49:07.684570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.875 [2024-11-20 08:49:07.684634] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:36.875 [2024-11-20 08:49:07.684649] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:36.875 [2024-11-20 08:49:07.711853] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.875 "name": "raid_bdev1", 00:15:36.875 "uuid": "4ceb295c-e2f5-400a-b5d0-fc0e517b579d", 00:15:36.875 "strip_size_kb": 0, 00:15:36.875 "state": "online", 00:15:36.875 "raid_level": "raid1", 00:15:36.875 "superblock": true, 00:15:36.875 "num_base_bdevs": 2, 00:15:36.875 "num_base_bdevs_discovered": 1, 00:15:36.875 "num_base_bdevs_operational": 1, 00:15:36.875 "base_bdevs_list": [ 00:15:36.875 { 00:15:36.875 "name": null, 00:15:36.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.875 "is_configured": false, 00:15:36.875 "data_offset": 0, 00:15:36.875 "data_size": 63488 00:15:36.875 }, 00:15:36.875 { 00:15:36.875 "name": "BaseBdev2", 00:15:36.875 "uuid": "a530dfc7-f658-5f93-b97e-bcfb780db0b1", 00:15:36.875 "is_configured": true, 00:15:36.875 "data_offset": 2048, 00:15:36.875 "data_size": 63488 00:15:36.875 } 00:15:36.875 ] 00:15:36.875 }' 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.875 08:49:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.393 147.50 IOPS, 442.50 MiB/s [2024-11-20T08:49:08.309Z] 08:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:37.393 08:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.393 08:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:37.393 08:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:37.393 08:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.393 08:49:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.393 08:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.393 08:49:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.393 08:49:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.393 08:49:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.651 08:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.651 "name": "raid_bdev1", 00:15:37.651 "uuid": "4ceb295c-e2f5-400a-b5d0-fc0e517b579d", 00:15:37.651 "strip_size_kb": 0, 00:15:37.651 "state": "online", 00:15:37.651 "raid_level": "raid1", 00:15:37.651 "superblock": true, 00:15:37.651 "num_base_bdevs": 2, 00:15:37.651 "num_base_bdevs_discovered": 1, 00:15:37.651 "num_base_bdevs_operational": 1, 00:15:37.651 "base_bdevs_list": [ 00:15:37.651 { 00:15:37.651 "name": null, 00:15:37.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.651 "is_configured": false, 00:15:37.651 "data_offset": 0, 00:15:37.651 "data_size": 63488 00:15:37.651 }, 00:15:37.651 { 00:15:37.651 "name": "BaseBdev2", 00:15:37.651 "uuid": "a530dfc7-f658-5f93-b97e-bcfb780db0b1", 00:15:37.651 "is_configured": true, 00:15:37.651 "data_offset": 2048, 00:15:37.651 "data_size": 63488 00:15:37.651 } 00:15:37.651 ] 00:15:37.651 }' 00:15:37.651 08:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.651 08:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:37.651 08:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.651 08:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:37.651 08:49:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:37.651 08:49:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.651 08:49:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.651 [2024-11-20 08:49:08.437074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:37.651 08:49:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.651 08:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:37.651 [2024-11-20 08:49:08.496487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:37.651 [2024-11-20 08:49:08.498902] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:37.911 [2024-11-20 08:49:08.624411] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:37.911 [2024-11-20 08:49:08.624888] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:38.169 [2024-11-20 08:49:08.868787] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:38.427 156.67 IOPS, 470.00 MiB/s [2024-11-20T08:49:09.343Z] [2024-11-20 08:49:09.219123] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:38.685 [2024-11-20 08:49:09.452509] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:38.685 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.685 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:38.685 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.685 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.685 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.685 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.685 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.685 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.685 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:38.685 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.685 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.685 "name": "raid_bdev1", 00:15:38.685 "uuid": "4ceb295c-e2f5-400a-b5d0-fc0e517b579d", 00:15:38.685 "strip_size_kb": 0, 00:15:38.685 "state": "online", 00:15:38.685 "raid_level": "raid1", 00:15:38.685 "superblock": true, 00:15:38.685 "num_base_bdevs": 2, 00:15:38.685 "num_base_bdevs_discovered": 2, 00:15:38.685 "num_base_bdevs_operational": 2, 00:15:38.685 "process": { 00:15:38.685 "type": "rebuild", 00:15:38.685 "target": "spare", 00:15:38.685 "progress": { 00:15:38.685 "blocks": 10240, 00:15:38.685 "percent": 16 00:15:38.685 } 00:15:38.685 }, 00:15:38.685 "base_bdevs_list": [ 00:15:38.685 { 00:15:38.685 "name": "spare", 00:15:38.686 "uuid": "057e089f-3de7-5341-9b7f-94d1a4f4db98", 00:15:38.686 "is_configured": true, 00:15:38.686 "data_offset": 2048, 00:15:38.686 "data_size": 63488 00:15:38.686 }, 00:15:38.686 { 00:15:38.686 "name": "BaseBdev2", 00:15:38.686 "uuid": "a530dfc7-f658-5f93-b97e-bcfb780db0b1", 00:15:38.686 "is_configured": true, 00:15:38.686 
"data_offset": 2048, 00:15:38.686 "data_size": 63488 00:15:38.686 } 00:15:38.686 ] 00:15:38.686 }' 00:15:38.686 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.686 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.686 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.944 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.944 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:38.944 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:38.944 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:38.944 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:38.944 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:38.944 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:38.944 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=450 00:15:38.944 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:38.944 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.944 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.944 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.944 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.944 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:15:38.944 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.944 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.944 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:38.944 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.945 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.945 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.945 "name": "raid_bdev1", 00:15:38.945 "uuid": "4ceb295c-e2f5-400a-b5d0-fc0e517b579d", 00:15:38.945 "strip_size_kb": 0, 00:15:38.945 "state": "online", 00:15:38.945 "raid_level": "raid1", 00:15:38.945 "superblock": true, 00:15:38.945 "num_base_bdevs": 2, 00:15:38.945 "num_base_bdevs_discovered": 2, 00:15:38.945 "num_base_bdevs_operational": 2, 00:15:38.945 "process": { 00:15:38.945 "type": "rebuild", 00:15:38.945 "target": "spare", 00:15:38.945 "progress": { 00:15:38.945 "blocks": 12288, 00:15:38.945 "percent": 19 00:15:38.945 } 00:15:38.945 }, 00:15:38.945 "base_bdevs_list": [ 00:15:38.945 { 00:15:38.945 "name": "spare", 00:15:38.945 "uuid": "057e089f-3de7-5341-9b7f-94d1a4f4db98", 00:15:38.945 "is_configured": true, 00:15:38.945 "data_offset": 2048, 00:15:38.945 "data_size": 63488 00:15:38.945 }, 00:15:38.945 { 00:15:38.945 "name": "BaseBdev2", 00:15:38.945 "uuid": "a530dfc7-f658-5f93-b97e-bcfb780db0b1", 00:15:38.945 "is_configured": true, 00:15:38.945 "data_offset": 2048, 00:15:38.945 "data_size": 63488 00:15:38.945 } 00:15:38.945 ] 00:15:38.945 }' 00:15:38.945 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.945 [2024-11-20 08:49:09.735045] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
14336 offset_begin: 12288 offset_end: 18432 00:15:38.945 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.945 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.945 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.945 08:49:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:39.770 140.25 IOPS, 420.75 MiB/s [2024-11-20T08:49:10.686Z] [2024-11-20 08:49:10.505713] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:40.028 08:49:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:40.028 08:49:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.028 08:49:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.028 08:49:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.028 08:49:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.028 08:49:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.028 08:49:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.028 08:49:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.028 08:49:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.028 08:49:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.028 08:49:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.028 08:49:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.028 "name": "raid_bdev1", 00:15:40.028 "uuid": "4ceb295c-e2f5-400a-b5d0-fc0e517b579d", 00:15:40.028 "strip_size_kb": 0, 00:15:40.028 "state": "online", 00:15:40.028 "raid_level": "raid1", 00:15:40.028 "superblock": true, 00:15:40.028 "num_base_bdevs": 2, 00:15:40.028 "num_base_bdevs_discovered": 2, 00:15:40.028 "num_base_bdevs_operational": 2, 00:15:40.028 "process": { 00:15:40.028 "type": "rebuild", 00:15:40.028 "target": "spare", 00:15:40.028 "progress": { 00:15:40.028 "blocks": 32768, 00:15:40.028 "percent": 51 00:15:40.028 } 00:15:40.028 }, 00:15:40.028 "base_bdevs_list": [ 00:15:40.028 { 00:15:40.028 "name": "spare", 00:15:40.028 "uuid": "057e089f-3de7-5341-9b7f-94d1a4f4db98", 00:15:40.028 "is_configured": true, 00:15:40.028 "data_offset": 2048, 00:15:40.028 "data_size": 63488 00:15:40.028 }, 00:15:40.028 { 00:15:40.028 "name": "BaseBdev2", 00:15:40.028 "uuid": "a530dfc7-f658-5f93-b97e-bcfb780db0b1", 00:15:40.028 "is_configured": true, 00:15:40.028 "data_offset": 2048, 00:15:40.028 "data_size": 63488 00:15:40.028 } 00:15:40.028 ] 00:15:40.028 }' 00:15:40.028 08:49:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.028 08:49:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:40.028 08:49:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.287 08:49:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:40.287 08:49:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:40.545 124.40 IOPS, 373.20 MiB/s [2024-11-20T08:49:11.461Z] [2024-11-20 08:49:11.210180] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:40.803 [2024-11-20 08:49:11.546330] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:41.370 08:49:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:41.370 08:49:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.370 08:49:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.370 08:49:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.370 08:49:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.370 08:49:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.370 08:49:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.370 08:49:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.370 08:49:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.370 08:49:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.370 110.67 IOPS, 332.00 MiB/s [2024-11-20T08:49:12.286Z] 08:49:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.370 08:49:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.370 "name": "raid_bdev1", 00:15:41.370 "uuid": "4ceb295c-e2f5-400a-b5d0-fc0e517b579d", 00:15:41.370 "strip_size_kb": 0, 00:15:41.370 "state": "online", 00:15:41.370 "raid_level": "raid1", 00:15:41.370 "superblock": true, 00:15:41.370 "num_base_bdevs": 2, 00:15:41.370 "num_base_bdevs_discovered": 2, 00:15:41.370 "num_base_bdevs_operational": 2, 00:15:41.370 "process": { 00:15:41.370 "type": "rebuild", 00:15:41.370 "target": "spare", 00:15:41.370 "progress": { 
00:15:41.370 "blocks": 53248, 00:15:41.370 "percent": 83 00:15:41.370 } 00:15:41.370 }, 00:15:41.370 "base_bdevs_list": [ 00:15:41.370 { 00:15:41.370 "name": "spare", 00:15:41.370 "uuid": "057e089f-3de7-5341-9b7f-94d1a4f4db98", 00:15:41.370 "is_configured": true, 00:15:41.370 "data_offset": 2048, 00:15:41.370 "data_size": 63488 00:15:41.370 }, 00:15:41.370 { 00:15:41.370 "name": "BaseBdev2", 00:15:41.370 "uuid": "a530dfc7-f658-5f93-b97e-bcfb780db0b1", 00:15:41.370 "is_configured": true, 00:15:41.370 "data_offset": 2048, 00:15:41.370 "data_size": 63488 00:15:41.370 } 00:15:41.370 ] 00:15:41.370 }' 00:15:41.370 08:49:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.370 08:49:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:41.370 08:49:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.370 08:49:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.370 08:49:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:41.628 [2024-11-20 08:49:12.531359] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:41.886 [2024-11-20 08:49:12.630839] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:41.886 [2024-11-20 08:49:12.633074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.403 99.14 IOPS, 297.43 MiB/s [2024-11-20T08:49:13.319Z] 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:42.403 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.403 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.403 08:49:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.403 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.404 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.404 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.404 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.404 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.404 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.404 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.404 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.404 "name": "raid_bdev1", 00:15:42.404 "uuid": "4ceb295c-e2f5-400a-b5d0-fc0e517b579d", 00:15:42.404 "strip_size_kb": 0, 00:15:42.404 "state": "online", 00:15:42.404 "raid_level": "raid1", 00:15:42.404 "superblock": true, 00:15:42.404 "num_base_bdevs": 2, 00:15:42.404 "num_base_bdevs_discovered": 2, 00:15:42.404 "num_base_bdevs_operational": 2, 00:15:42.404 "base_bdevs_list": [ 00:15:42.404 { 00:15:42.404 "name": "spare", 00:15:42.404 "uuid": "057e089f-3de7-5341-9b7f-94d1a4f4db98", 00:15:42.404 "is_configured": true, 00:15:42.404 "data_offset": 2048, 00:15:42.404 "data_size": 63488 00:15:42.404 }, 00:15:42.404 { 00:15:42.404 "name": "BaseBdev2", 00:15:42.404 "uuid": "a530dfc7-f658-5f93-b97e-bcfb780db0b1", 00:15:42.404 "is_configured": true, 00:15:42.404 "data_offset": 2048, 00:15:42.404 "data_size": 63488 00:15:42.404 } 00:15:42.404 ] 00:15:42.404 }' 00:15:42.404 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.404 08:49:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:42.404 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.404 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:42.404 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:42.404 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:42.404 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.404 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:42.404 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:42.404 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.404 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.404 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.404 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.404 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.663 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.663 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.663 "name": "raid_bdev1", 00:15:42.663 "uuid": "4ceb295c-e2f5-400a-b5d0-fc0e517b579d", 00:15:42.663 "strip_size_kb": 0, 00:15:42.663 "state": "online", 00:15:42.663 "raid_level": "raid1", 00:15:42.663 "superblock": true, 00:15:42.663 "num_base_bdevs": 2, 00:15:42.663 "num_base_bdevs_discovered": 2, 00:15:42.663 
"num_base_bdevs_operational": 2, 00:15:42.663 "base_bdevs_list": [ 00:15:42.663 { 00:15:42.663 "name": "spare", 00:15:42.663 "uuid": "057e089f-3de7-5341-9b7f-94d1a4f4db98", 00:15:42.663 "is_configured": true, 00:15:42.663 "data_offset": 2048, 00:15:42.663 "data_size": 63488 00:15:42.663 }, 00:15:42.663 { 00:15:42.663 "name": "BaseBdev2", 00:15:42.663 "uuid": "a530dfc7-f658-5f93-b97e-bcfb780db0b1", 00:15:42.663 "is_configured": true, 00:15:42.663 "data_offset": 2048, 00:15:42.663 "data_size": 63488 00:15:42.663 } 00:15:42.663 ] 00:15:42.663 }' 00:15:42.663 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.663 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:42.663 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.663 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:42.663 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:42.663 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.663 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.663 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:42.663 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:42.663 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:42.663 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.663 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.663 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.663 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.663 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.663 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.663 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.663 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.663 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.663 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.663 "name": "raid_bdev1", 00:15:42.663 "uuid": "4ceb295c-e2f5-400a-b5d0-fc0e517b579d", 00:15:42.663 "strip_size_kb": 0, 00:15:42.663 "state": "online", 00:15:42.663 "raid_level": "raid1", 00:15:42.663 "superblock": true, 00:15:42.663 "num_base_bdevs": 2, 00:15:42.663 "num_base_bdevs_discovered": 2, 00:15:42.663 "num_base_bdevs_operational": 2, 00:15:42.664 "base_bdevs_list": [ 00:15:42.664 { 00:15:42.664 "name": "spare", 00:15:42.664 "uuid": "057e089f-3de7-5341-9b7f-94d1a4f4db98", 00:15:42.664 "is_configured": true, 00:15:42.664 "data_offset": 2048, 00:15:42.664 "data_size": 63488 00:15:42.664 }, 00:15:42.664 { 00:15:42.664 "name": "BaseBdev2", 00:15:42.664 "uuid": "a530dfc7-f658-5f93-b97e-bcfb780db0b1", 00:15:42.664 "is_configured": true, 00:15:42.664 "data_offset": 2048, 00:15:42.664 "data_size": 63488 00:15:42.664 } 00:15:42.664 ] 00:15:42.664 }' 00:15:42.664 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.664 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.231 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:15:43.231 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.231 08:49:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.231 [2024-11-20 08:49:13.989560] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:43.231 [2024-11-20 08:49:13.989599] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:43.231 91.50 IOPS, 274.50 MiB/s 00:15:43.231 Latency(us) 00:15:43.231 [2024-11-20T08:49:14.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.231 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:43.231 raid_bdev1 : 8.07 91.01 273.02 0.00 0.00 14890.03 266.24 116773.24 00:15:43.231 [2024-11-20T08:49:14.147Z] =================================================================================================================== 00:15:43.231 [2024-11-20T08:49:14.147Z] Total : 91.01 273.02 0.00 0.00 14890.03 266.24 116773.24 00:15:43.231 [2024-11-20 08:49:14.076046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.231 [2024-11-20 08:49:14.076105] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.231 [2024-11-20 08:49:14.076233] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:43.231 [2024-11-20 08:49:14.076252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:43.231 { 00:15:43.231 "results": [ 00:15:43.231 { 00:15:43.231 "job": "raid_bdev1", 00:15:43.231 "core_mask": "0x1", 00:15:43.231 "workload": "randrw", 00:15:43.231 "percentage": 50, 00:15:43.231 "status": "finished", 00:15:43.231 "queue_depth": 2, 00:15:43.231 "io_size": 3145728, 00:15:43.231 "runtime": 8.065408, 00:15:43.231 "iops": 91.00593547158432, 
00:15:43.231 "mibps": 273.01780641475295, 00:15:43.232 "io_failed": 0, 00:15:43.232 "io_timeout": 0, 00:15:43.232 "avg_latency_us": 14890.032955164726, 00:15:43.232 "min_latency_us": 266.24, 00:15:43.232 "max_latency_us": 116773.23636363636 00:15:43.232 } 00:15:43.232 ], 00:15:43.232 "core_count": 1 00:15:43.232 } 00:15:43.232 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.232 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.232 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.232 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.232 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:43.232 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.232 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:43.232 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:43.232 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:43.232 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:43.232 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:43.232 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:43.232 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:43.232 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:43.232 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:43.232 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@12 -- # local i 00:15:43.232 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:43.232 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:43.232 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:43.797 /dev/nbd0 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:43.797 1+0 records in 00:15:43.797 1+0 records out 00:15:43.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386461 s, 10.6 MB/s 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:43.797 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:15:44.056 /dev/nbd1 00:15:44.056 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:44.056 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:44.056 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:44.056 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:44.056 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:44.056 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:44.056 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:44.056 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:44.056 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:44.056 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:44.056 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.056 1+0 records in 00:15:44.056 1+0 records out 00:15:44.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391178 s, 10.5 MB/s 00:15:44.056 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.056 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:44.056 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.056 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:15:44.056 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:44.056 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:44.056 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:44.056 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:44.314 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:44.314 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:44.314 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:44.314 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:44.314 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:44.314 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:44.314 08:49:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:44.572 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:44.572 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:44.572 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:44.572 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:44.572 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:44.572 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:44.572 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 
00:15:44.572 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:44.572 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:44.572 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:44.572 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:44.572 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:44.572 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:44.572 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:44.572 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:44.829 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:44.829 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:44.829 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 
00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.830 [2024-11-20 08:49:15.619039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:44.830 [2024-11-20 08:49:15.619130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.830 [2024-11-20 08:49:15.619207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:44.830 [2024-11-20 08:49:15.619226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.830 [2024-11-20 08:49:15.622185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.830 [2024-11-20 08:49:15.622239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:44.830 [2024-11-20 08:49:15.622352] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:44.830 [2024-11-20 08:49:15.622444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:44.830 [2024-11-20 08:49:15.622620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:44.830 spare 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd 
bdev_wait_for_examine 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.830 [2024-11-20 08:49:15.722743] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:44.830 [2024-11-20 08:49:15.722778] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:44.830 [2024-11-20 08:49:15.723156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:15:44.830 [2024-11-20 08:49:15.723380] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:44.830 [2024-11-20 08:49:15.723406] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:44.830 [2024-11-20 08:49:15.723622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.830 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.087 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.087 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.087 "name": "raid_bdev1", 00:15:45.087 "uuid": "4ceb295c-e2f5-400a-b5d0-fc0e517b579d", 00:15:45.087 "strip_size_kb": 0, 00:15:45.087 "state": "online", 00:15:45.087 "raid_level": "raid1", 00:15:45.087 "superblock": true, 00:15:45.087 "num_base_bdevs": 2, 00:15:45.087 "num_base_bdevs_discovered": 2, 00:15:45.087 "num_base_bdevs_operational": 2, 00:15:45.087 "base_bdevs_list": [ 00:15:45.087 { 00:15:45.087 "name": "spare", 00:15:45.087 "uuid": "057e089f-3de7-5341-9b7f-94d1a4f4db98", 00:15:45.087 "is_configured": true, 00:15:45.087 "data_offset": 2048, 00:15:45.087 "data_size": 63488 00:15:45.087 }, 00:15:45.087 { 00:15:45.087 "name": "BaseBdev2", 00:15:45.087 "uuid": "a530dfc7-f658-5f93-b97e-bcfb780db0b1", 00:15:45.087 "is_configured": true, 00:15:45.087 "data_offset": 2048, 00:15:45.087 "data_size": 63488 00:15:45.087 } 00:15:45.087 ] 00:15:45.087 }' 00:15:45.087 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.087 08:49:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:15:45.345 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:45.345 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.345 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:45.345 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:45.345 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.603 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.603 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.603 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.603 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.603 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.603 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.603 "name": "raid_bdev1", 00:15:45.604 "uuid": "4ceb295c-e2f5-400a-b5d0-fc0e517b579d", 00:15:45.604 "strip_size_kb": 0, 00:15:45.604 "state": "online", 00:15:45.604 "raid_level": "raid1", 00:15:45.604 "superblock": true, 00:15:45.604 "num_base_bdevs": 2, 00:15:45.604 "num_base_bdevs_discovered": 2, 00:15:45.604 "num_base_bdevs_operational": 2, 00:15:45.604 "base_bdevs_list": [ 00:15:45.604 { 00:15:45.604 "name": "spare", 00:15:45.604 "uuid": "057e089f-3de7-5341-9b7f-94d1a4f4db98", 00:15:45.604 "is_configured": true, 00:15:45.604 "data_offset": 2048, 00:15:45.604 "data_size": 63488 00:15:45.604 }, 00:15:45.604 { 00:15:45.604 "name": "BaseBdev2", 00:15:45.604 "uuid": "a530dfc7-f658-5f93-b97e-bcfb780db0b1", 00:15:45.604 "is_configured": 
true, 00:15:45.604 "data_offset": 2048, 00:15:45.604 "data_size": 63488 00:15:45.604 } 00:15:45.604 ] 00:15:45.604 }' 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.604 [2024-11-20 08:49:16.463986] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.604 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.928 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.928 "name": "raid_bdev1", 00:15:45.928 "uuid": "4ceb295c-e2f5-400a-b5d0-fc0e517b579d", 00:15:45.928 "strip_size_kb": 0, 00:15:45.928 "state": "online", 00:15:45.928 "raid_level": "raid1", 00:15:45.928 "superblock": true, 00:15:45.928 "num_base_bdevs": 2, 00:15:45.928 "num_base_bdevs_discovered": 1, 00:15:45.928 "num_base_bdevs_operational": 1, 00:15:45.928 "base_bdevs_list": [ 
00:15:45.928 { 00:15:45.928 "name": null, 00:15:45.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.928 "is_configured": false, 00:15:45.928 "data_offset": 0, 00:15:45.928 "data_size": 63488 00:15:45.928 }, 00:15:45.928 { 00:15:45.928 "name": "BaseBdev2", 00:15:45.928 "uuid": "a530dfc7-f658-5f93-b97e-bcfb780db0b1", 00:15:45.928 "is_configured": true, 00:15:45.928 "data_offset": 2048, 00:15:45.928 "data_size": 63488 00:15:45.928 } 00:15:45.928 ] 00:15:45.928 }' 00:15:45.928 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.928 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.186 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:46.186 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.186 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.186 [2024-11-20 08:49:16.964260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:46.186 [2024-11-20 08:49:16.964502] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:46.186 [2024-11-20 08:49:16.964537] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:46.186 [2024-11-20 08:49:16.964577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:46.186 [2024-11-20 08:49:16.980973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:15:46.186 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.186 08:49:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:46.186 [2024-11-20 08:49:16.983468] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:47.121 08:49:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.121 08:49:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.121 08:49:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.121 08:49:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.121 08:49:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.121 08:49:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.121 08:49:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.121 08:49:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:47.121 08:49:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.121 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.380 "name": "raid_bdev1", 00:15:47.380 "uuid": "4ceb295c-e2f5-400a-b5d0-fc0e517b579d", 00:15:47.380 "strip_size_kb": 0, 00:15:47.380 "state": "online", 
00:15:47.380 "raid_level": "raid1", 00:15:47.380 "superblock": true, 00:15:47.380 "num_base_bdevs": 2, 00:15:47.380 "num_base_bdevs_discovered": 2, 00:15:47.380 "num_base_bdevs_operational": 2, 00:15:47.380 "process": { 00:15:47.380 "type": "rebuild", 00:15:47.380 "target": "spare", 00:15:47.380 "progress": { 00:15:47.380 "blocks": 20480, 00:15:47.380 "percent": 32 00:15:47.380 } 00:15:47.380 }, 00:15:47.380 "base_bdevs_list": [ 00:15:47.380 { 00:15:47.380 "name": "spare", 00:15:47.380 "uuid": "057e089f-3de7-5341-9b7f-94d1a4f4db98", 00:15:47.380 "is_configured": true, 00:15:47.380 "data_offset": 2048, 00:15:47.380 "data_size": 63488 00:15:47.380 }, 00:15:47.380 { 00:15:47.380 "name": "BaseBdev2", 00:15:47.380 "uuid": "a530dfc7-f658-5f93-b97e-bcfb780db0b1", 00:15:47.380 "is_configured": true, 00:15:47.380 "data_offset": 2048, 00:15:47.380 "data_size": 63488 00:15:47.380 } 00:15:47.380 ] 00:15:47.380 }' 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:47.380 [2024-11-20 08:49:18.152964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:47.380 [2024-11-20 08:49:18.191698] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:47.380 [2024-11-20 
08:49:18.191809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.380 [2024-11-20 08:49:18.191831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:47.380 [2024-11-20 08:49:18.191845] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.380 "name": "raid_bdev1", 00:15:47.380 "uuid": "4ceb295c-e2f5-400a-b5d0-fc0e517b579d", 00:15:47.380 "strip_size_kb": 0, 00:15:47.380 "state": "online", 00:15:47.380 "raid_level": "raid1", 00:15:47.380 "superblock": true, 00:15:47.380 "num_base_bdevs": 2, 00:15:47.380 "num_base_bdevs_discovered": 1, 00:15:47.380 "num_base_bdevs_operational": 1, 00:15:47.380 "base_bdevs_list": [ 00:15:47.380 { 00:15:47.380 "name": null, 00:15:47.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.380 "is_configured": false, 00:15:47.380 "data_offset": 0, 00:15:47.380 "data_size": 63488 00:15:47.380 }, 00:15:47.380 { 00:15:47.380 "name": "BaseBdev2", 00:15:47.380 "uuid": "a530dfc7-f658-5f93-b97e-bcfb780db0b1", 00:15:47.380 "is_configured": true, 00:15:47.380 "data_offset": 2048, 00:15:47.380 "data_size": 63488 00:15:47.380 } 00:15:47.380 ] 00:15:47.380 }' 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.380 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:47.947 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:47.947 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.947 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:47.947 [2024-11-20 08:49:18.777530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:47.947 [2024-11-20 08:49:18.777627] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.947 [2024-11-20 08:49:18.777661] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:15:47.947 [2024-11-20 08:49:18.777679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.947 [2024-11-20 08:49:18.778296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.947 [2024-11-20 08:49:18.778339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:47.947 [2024-11-20 08:49:18.778460] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:47.947 [2024-11-20 08:49:18.778491] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:47.947 [2024-11-20 08:49:18.778506] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:47.947 [2024-11-20 08:49:18.778542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:47.947 [2024-11-20 08:49:18.794596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:15:47.947 spare 00:15:47.947 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.947 08:49:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:47.947 [2024-11-20 08:49:18.797030] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:48.897 08:49:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:48.897 08:49:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.897 08:49:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.897 08:49:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.897 08:49:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.897 08:49:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.897 08:49:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.897 08:49:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.897 08:49:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:49.155 08:49:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.156 08:49:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.156 "name": "raid_bdev1", 00:15:49.156 "uuid": "4ceb295c-e2f5-400a-b5d0-fc0e517b579d", 00:15:49.156 "strip_size_kb": 0, 00:15:49.156 "state": "online", 00:15:49.156 "raid_level": "raid1", 00:15:49.156 "superblock": true, 00:15:49.156 "num_base_bdevs": 2, 00:15:49.156 "num_base_bdevs_discovered": 2, 00:15:49.156 "num_base_bdevs_operational": 2, 00:15:49.156 "process": { 00:15:49.156 "type": "rebuild", 00:15:49.156 "target": "spare", 00:15:49.156 "progress": { 00:15:49.156 "blocks": 20480, 00:15:49.156 "percent": 32 00:15:49.156 } 00:15:49.156 }, 00:15:49.156 "base_bdevs_list": [ 00:15:49.156 { 00:15:49.156 "name": "spare", 00:15:49.156 "uuid": "057e089f-3de7-5341-9b7f-94d1a4f4db98", 00:15:49.156 "is_configured": true, 00:15:49.156 "data_offset": 2048, 00:15:49.156 "data_size": 63488 00:15:49.156 }, 00:15:49.156 { 00:15:49.156 "name": "BaseBdev2", 00:15:49.156 "uuid": "a530dfc7-f658-5f93-b97e-bcfb780db0b1", 00:15:49.156 "is_configured": true, 00:15:49.156 "data_offset": 2048, 00:15:49.156 "data_size": 63488 00:15:49.156 } 00:15:49.156 ] 00:15:49.156 }' 00:15:49.156 08:49:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.156 08:49:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:49.156 08:49:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.156 08:49:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.156 08:49:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:49.156 08:49:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.156 08:49:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:49.156 [2024-11-20 08:49:19.974360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:49.156 [2024-11-20 08:49:20.005063] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:49.156 [2024-11-20 08:49:20.005148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.156 [2024-11-20 08:49:20.005194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:49.156 [2024-11-20 08:49:20.005207] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:49.156 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.156 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:49.156 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.156 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.156 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.156 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.156 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:15:49.156 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.156 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.156 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.156 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.156 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.156 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.156 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.156 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:49.156 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.414 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.414 "name": "raid_bdev1", 00:15:49.414 "uuid": "4ceb295c-e2f5-400a-b5d0-fc0e517b579d", 00:15:49.414 "strip_size_kb": 0, 00:15:49.414 "state": "online", 00:15:49.414 "raid_level": "raid1", 00:15:49.414 "superblock": true, 00:15:49.414 "num_base_bdevs": 2, 00:15:49.414 "num_base_bdevs_discovered": 1, 00:15:49.414 "num_base_bdevs_operational": 1, 00:15:49.414 "base_bdevs_list": [ 00:15:49.414 { 00:15:49.414 "name": null, 00:15:49.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.414 "is_configured": false, 00:15:49.414 "data_offset": 0, 00:15:49.414 "data_size": 63488 00:15:49.414 }, 00:15:49.414 { 00:15:49.414 "name": "BaseBdev2", 00:15:49.414 "uuid": "a530dfc7-f658-5f93-b97e-bcfb780db0b1", 00:15:49.414 "is_configured": true, 00:15:49.414 "data_offset": 2048, 00:15:49.414 "data_size": 63488 00:15:49.414 } 00:15:49.414 ] 00:15:49.414 }' 
00:15:49.414 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.414 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:49.673 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:49.673 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.673 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:49.673 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:49.673 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.673 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.673 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.673 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.673 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:49.673 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.931 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.931 "name": "raid_bdev1", 00:15:49.931 "uuid": "4ceb295c-e2f5-400a-b5d0-fc0e517b579d", 00:15:49.931 "strip_size_kb": 0, 00:15:49.931 "state": "online", 00:15:49.931 "raid_level": "raid1", 00:15:49.931 "superblock": true, 00:15:49.931 "num_base_bdevs": 2, 00:15:49.931 "num_base_bdevs_discovered": 1, 00:15:49.931 "num_base_bdevs_operational": 1, 00:15:49.931 "base_bdevs_list": [ 00:15:49.931 { 00:15:49.931 "name": null, 00:15:49.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.931 "is_configured": false, 00:15:49.931 "data_offset": 0, 
00:15:49.931 "data_size": 63488 00:15:49.931 }, 00:15:49.931 { 00:15:49.931 "name": "BaseBdev2", 00:15:49.931 "uuid": "a530dfc7-f658-5f93-b97e-bcfb780db0b1", 00:15:49.931 "is_configured": true, 00:15:49.931 "data_offset": 2048, 00:15:49.931 "data_size": 63488 00:15:49.931 } 00:15:49.931 ] 00:15:49.931 }' 00:15:49.931 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.931 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:49.931 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.931 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:49.931 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:49.931 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.931 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:49.931 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.931 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:49.931 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.932 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:49.932 [2024-11-20 08:49:20.730271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:49.932 [2024-11-20 08:49:20.730343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.932 [2024-11-20 08:49:20.730380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:49.932 [2024-11-20 08:49:20.730395] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.932 [2024-11-20 08:49:20.730977] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.932 [2024-11-20 08:49:20.731007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:49.932 [2024-11-20 08:49:20.731131] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:49.932 [2024-11-20 08:49:20.731152] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:49.932 [2024-11-20 08:49:20.731181] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:49.932 [2024-11-20 08:49:20.731196] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:49.932 BaseBdev1 00:15:49.932 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.932 08:49:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:50.866 08:49:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:50.866 08:49:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.866 08:49:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.866 08:49:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.866 08:49:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.866 08:49:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:50.866 08:49:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.866 08:49:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.866 08:49:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.866 08:49:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.866 08:49:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.866 08:49:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.866 08:49:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.866 08:49:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.866 08:49:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.126 08:49:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.126 "name": "raid_bdev1", 00:15:51.126 "uuid": "4ceb295c-e2f5-400a-b5d0-fc0e517b579d", 00:15:51.126 "strip_size_kb": 0, 00:15:51.126 "state": "online", 00:15:51.126 "raid_level": "raid1", 00:15:51.126 "superblock": true, 00:15:51.126 "num_base_bdevs": 2, 00:15:51.126 "num_base_bdevs_discovered": 1, 00:15:51.126 "num_base_bdevs_operational": 1, 00:15:51.126 "base_bdevs_list": [ 00:15:51.126 { 00:15:51.126 "name": null, 00:15:51.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.126 "is_configured": false, 00:15:51.126 "data_offset": 0, 00:15:51.126 "data_size": 63488 00:15:51.126 }, 00:15:51.126 { 00:15:51.126 "name": "BaseBdev2", 00:15:51.126 "uuid": "a530dfc7-f658-5f93-b97e-bcfb780db0b1", 00:15:51.126 "is_configured": true, 00:15:51.126 "data_offset": 2048, 00:15:51.126 "data_size": 63488 00:15:51.126 } 00:15:51.126 ] 00:15:51.126 }' 00:15:51.126 08:49:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.126 08:49:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:15:51.388 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:51.388 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.388 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:51.388 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:51.388 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.388 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.388 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.388 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:51.388 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.388 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.650 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.650 "name": "raid_bdev1", 00:15:51.650 "uuid": "4ceb295c-e2f5-400a-b5d0-fc0e517b579d", 00:15:51.650 "strip_size_kb": 0, 00:15:51.650 "state": "online", 00:15:51.650 "raid_level": "raid1", 00:15:51.650 "superblock": true, 00:15:51.650 "num_base_bdevs": 2, 00:15:51.650 "num_base_bdevs_discovered": 1, 00:15:51.650 "num_base_bdevs_operational": 1, 00:15:51.650 "base_bdevs_list": [ 00:15:51.650 { 00:15:51.650 "name": null, 00:15:51.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.650 "is_configured": false, 00:15:51.650 "data_offset": 0, 00:15:51.650 "data_size": 63488 00:15:51.650 }, 00:15:51.650 { 00:15:51.650 "name": "BaseBdev2", 00:15:51.650 "uuid": "a530dfc7-f658-5f93-b97e-bcfb780db0b1", 00:15:51.650 "is_configured": true, 
00:15:51.650 "data_offset": 2048, 00:15:51.650 "data_size": 63488 00:15:51.650 } 00:15:51.650 ] 00:15:51.650 }' 00:15:51.650 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.650 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:51.650 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.650 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:51.650 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:51.650 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:15:51.650 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:51.650 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:51.650 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:51.650 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:51.650 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:51.650 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:51.650 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.650 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:51.650 [2024-11-20 08:49:22.443018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:51.651 [2024-11-20 08:49:22.443270] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:51.651 [2024-11-20 08:49:22.443294] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:51.651 request: 00:15:51.651 { 00:15:51.651 "base_bdev": "BaseBdev1", 00:15:51.651 "raid_bdev": "raid_bdev1", 00:15:51.651 "method": "bdev_raid_add_base_bdev", 00:15:51.651 "req_id": 1 00:15:51.651 } 00:15:51.651 Got JSON-RPC error response 00:15:51.651 response: 00:15:51.651 { 00:15:51.651 "code": -22, 00:15:51.651 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:51.651 } 00:15:51.651 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:51.651 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:15:51.651 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:51.651 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:51.651 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:51.651 08:49:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:52.587 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:52.587 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.587 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.587 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.587 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.587 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:15:52.587 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.587 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.587 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.587 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.587 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.587 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.587 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.587 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.587 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.846 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.846 "name": "raid_bdev1", 00:15:52.846 "uuid": "4ceb295c-e2f5-400a-b5d0-fc0e517b579d", 00:15:52.846 "strip_size_kb": 0, 00:15:52.846 "state": "online", 00:15:52.846 "raid_level": "raid1", 00:15:52.846 "superblock": true, 00:15:52.846 "num_base_bdevs": 2, 00:15:52.846 "num_base_bdevs_discovered": 1, 00:15:52.846 "num_base_bdevs_operational": 1, 00:15:52.846 "base_bdevs_list": [ 00:15:52.846 { 00:15:52.846 "name": null, 00:15:52.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.846 "is_configured": false, 00:15:52.846 "data_offset": 0, 00:15:52.846 "data_size": 63488 00:15:52.846 }, 00:15:52.846 { 00:15:52.846 "name": "BaseBdev2", 00:15:52.846 "uuid": "a530dfc7-f658-5f93-b97e-bcfb780db0b1", 00:15:52.846 "is_configured": true, 00:15:52.846 "data_offset": 2048, 00:15:52.846 "data_size": 63488 00:15:52.846 } 00:15:52.846 ] 00:15:52.846 }' 
00:15:52.846 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.846 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:53.104 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:53.104 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.104 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:53.104 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:53.104 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.104 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.104 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.105 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.105 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:53.105 08:49:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.105 08:49:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.105 "name": "raid_bdev1", 00:15:53.105 "uuid": "4ceb295c-e2f5-400a-b5d0-fc0e517b579d", 00:15:53.105 "strip_size_kb": 0, 00:15:53.105 "state": "online", 00:15:53.105 "raid_level": "raid1", 00:15:53.105 "superblock": true, 00:15:53.105 "num_base_bdevs": 2, 00:15:53.105 "num_base_bdevs_discovered": 1, 00:15:53.105 "num_base_bdevs_operational": 1, 00:15:53.105 "base_bdevs_list": [ 00:15:53.105 { 00:15:53.105 "name": null, 00:15:53.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.105 "is_configured": false, 00:15:53.105 "data_offset": 0, 
00:15:53.105 "data_size": 63488 00:15:53.105 }, 00:15:53.105 { 00:15:53.105 "name": "BaseBdev2", 00:15:53.105 "uuid": "a530dfc7-f658-5f93-b97e-bcfb780db0b1", 00:15:53.105 "is_configured": true, 00:15:53.105 "data_offset": 2048, 00:15:53.105 "data_size": 63488 00:15:53.105 } 00:15:53.105 ] 00:15:53.105 }' 00:15:53.105 08:49:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.363 08:49:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:53.363 08:49:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.363 08:49:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:53.363 08:49:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77096 00:15:53.364 08:49:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77096 ']' 00:15:53.364 08:49:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77096 00:15:53.364 08:49:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:15:53.364 08:49:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:53.364 08:49:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77096 00:15:53.364 killing process with pid 77096 00:15:53.364 Received shutdown signal, test time was about 18.153569 seconds 00:15:53.364 00:15:53.364 Latency(us) 00:15:53.364 [2024-11-20T08:49:24.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.364 [2024-11-20T08:49:24.280Z] =================================================================================================================== 00:15:53.364 [2024-11-20T08:49:24.280Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:53.364 08:49:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:53.364 08:49:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:53.364 08:49:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77096' 00:15:53.364 08:49:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77096 00:15:53.364 [2024-11-20 08:49:24.145669] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:53.364 08:49:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77096 00:15:53.364 [2024-11-20 08:49:24.145821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.364 [2024-11-20 08:49:24.145921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:53.364 [2024-11-20 08:49:24.145948] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:53.623 [2024-11-20 08:49:24.340044] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:54.561 08:49:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:54.561 00:15:54.561 real 0m21.433s 00:15:54.561 user 0m29.295s 00:15:54.561 sys 0m1.991s 00:15:54.561 08:49:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.561 08:49:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.561 ************************************ 00:15:54.561 END TEST raid_rebuild_test_sb_io 00:15:54.561 ************************************ 00:15:54.561 08:49:25 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:15:54.561 08:49:25 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:15:54.561 08:49:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:15:54.561 08:49:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.561 08:49:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:54.561 ************************************ 00:15:54.561 START TEST raid_rebuild_test 00:15:54.561 ************************************ 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77794 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77794 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77794 ']' 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:54.562 08:49:25 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:54.562 08:49:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.820 [2024-11-20 08:49:25.566875] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:15:54.820 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:54.820 Zero copy mechanism will not be used. 00:15:54.820 [2024-11-20 08:49:25.567231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77794 ] 00:15:55.079 [2024-11-20 08:49:25.750711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.079 [2024-11-20 08:49:25.873498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.337 [2024-11-20 08:49:26.073862] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.337 [2024-11-20 08:49:26.073913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.904 BaseBdev1_malloc 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.904 [2024-11-20 08:49:26.602108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:55.904 [2024-11-20 08:49:26.602231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.904 [2024-11-20 08:49:26.602274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:55.904 [2024-11-20 08:49:26.602293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.904 [2024-11-20 08:49:26.605396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.904 [2024-11-20 08:49:26.605615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:55.904 BaseBdev1 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:15:55.904 BaseBdev2_malloc 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.904 [2024-11-20 08:49:26.653225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:55.904 [2024-11-20 08:49:26.653314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.904 [2024-11-20 08:49:26.653342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:55.904 [2024-11-20 08:49:26.653362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.904 [2024-11-20 08:49:26.656242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.904 [2024-11-20 08:49:26.656292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:55.904 BaseBdev2 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.904 BaseBdev3_malloc 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.904 [2024-11-20 08:49:26.713599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:55.904 [2024-11-20 08:49:26.713707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.904 [2024-11-20 08:49:26.713742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:55.904 [2024-11-20 08:49:26.713761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.904 [2024-11-20 08:49:26.716479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.904 [2024-11-20 08:49:26.716677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:55.904 BaseBdev3 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.904 BaseBdev4_malloc 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:55.904 [2024-11-20 08:49:26.761770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:55.904 [2024-11-20 08:49:26.761843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.904 [2024-11-20 08:49:26.761872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:55.904 [2024-11-20 08:49:26.761890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.904 [2024-11-20 08:49:26.764627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.904 [2024-11-20 08:49:26.764682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:55.904 BaseBdev4 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.904 spare_malloc 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.904 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.163 spare_delay 00:15:56.163 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.163 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:56.163 
08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.163 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.163 [2024-11-20 08:49:26.821447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:56.163 [2024-11-20 08:49:26.821525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.164 [2024-11-20 08:49:26.821555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:56.164 [2024-11-20 08:49:26.821573] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.164 [2024-11-20 08:49:26.824341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.164 [2024-11-20 08:49:26.824394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:56.164 spare 00:15:56.164 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.164 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:56.164 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.164 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.164 [2024-11-20 08:49:26.829503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:56.164 [2024-11-20 08:49:26.831993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:56.164 [2024-11-20 08:49:26.832242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:56.164 [2024-11-20 08:49:26.832374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:56.164 [2024-11-20 08:49:26.832526] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:15:56.164 [2024-11-20 08:49:26.832651] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:56.164 [2024-11-20 08:49:26.833027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:56.164 [2024-11-20 08:49:26.833379] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:56.164 [2024-11-20 08:49:26.833509] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:56.164 [2024-11-20 08:49:26.833888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.164 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.164 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:56.164 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.164 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.164 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.164 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.164 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.164 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.164 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.164 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.164 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.164 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.164 08:49:26 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.164 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.164 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.164 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.164 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.164 "name": "raid_bdev1", 00:15:56.164 "uuid": "786d8b1b-0e54-4b80-b045-4d827d1514cf", 00:15:56.164 "strip_size_kb": 0, 00:15:56.164 "state": "online", 00:15:56.164 "raid_level": "raid1", 00:15:56.164 "superblock": false, 00:15:56.164 "num_base_bdevs": 4, 00:15:56.164 "num_base_bdevs_discovered": 4, 00:15:56.164 "num_base_bdevs_operational": 4, 00:15:56.164 "base_bdevs_list": [ 00:15:56.164 { 00:15:56.164 "name": "BaseBdev1", 00:15:56.164 "uuid": "549ec4fd-5b17-5f4c-9aad-5cc58023ebc0", 00:15:56.164 "is_configured": true, 00:15:56.164 "data_offset": 0, 00:15:56.164 "data_size": 65536 00:15:56.164 }, 00:15:56.164 { 00:15:56.164 "name": "BaseBdev2", 00:15:56.164 "uuid": "d8c3b6d3-a891-5223-a890-b684e2a41c0c", 00:15:56.164 "is_configured": true, 00:15:56.164 "data_offset": 0, 00:15:56.164 "data_size": 65536 00:15:56.164 }, 00:15:56.164 { 00:15:56.164 "name": "BaseBdev3", 00:15:56.164 "uuid": "c63ec73b-be8b-5b7d-82dd-aef817672fe8", 00:15:56.164 "is_configured": true, 00:15:56.164 "data_offset": 0, 00:15:56.164 "data_size": 65536 00:15:56.164 }, 00:15:56.164 { 00:15:56.164 "name": "BaseBdev4", 00:15:56.164 "uuid": "a0643124-a73e-5c9f-9a68-092ac2ab9cae", 00:15:56.164 "is_configured": true, 00:15:56.164 "data_offset": 0, 00:15:56.164 "data_size": 65536 00:15:56.164 } 00:15:56.164 ] 00:15:56.164 }' 00:15:56.164 08:49:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.164 08:49:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:56.433 08:49:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:56.433 08:49:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:56.433 08:49:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.433 08:49:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.707 [2024-11-20 08:49:27.342418] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.707 08:49:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.707 08:49:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:56.707 08:49:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.707 08:49:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:56.707 08:49:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.707 08:49:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.707 08:49:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.707 08:49:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:56.707 08:49:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:56.707 08:49:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:56.707 08:49:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:56.707 08:49:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:56.707 08:49:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:56.707 08:49:27 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:56.707 08:49:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:56.707 08:49:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:56.707 08:49:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:56.707 08:49:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:56.707 08:49:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:56.707 08:49:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:56.707 08:49:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:56.966 [2024-11-20 08:49:27.734153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:56.966 /dev/nbd0 00:15:56.966 08:49:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:56.966 08:49:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:56.966 08:49:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:56.966 08:49:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:56.966 08:49:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:56.966 08:49:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:56.966 08:49:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:56.966 08:49:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:56.966 08:49:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:56.966 08:49:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:56.966 08:49:27 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:56.966 1+0 records in 00:15:56.966 1+0 records out 00:15:56.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317423 s, 12.9 MB/s 00:15:56.966 08:49:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.966 08:49:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:56.966 08:49:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.966 08:49:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:56.966 08:49:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:56.966 08:49:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:56.966 08:49:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:56.966 08:49:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:56.966 08:49:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:56.966 08:49:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:16:05.084 65536+0 records in 00:16:05.084 65536+0 records out 00:16:05.084 33554432 bytes (34 MB, 32 MiB) copied, 8.08703 s, 4.1 MB/s 00:16:05.084 08:49:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:05.084 08:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.084 08:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:05.084 08:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:05.084 
08:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:05.084 08:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.084 08:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:05.343 [2024-11-20 08:49:36.160823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.343 08:49:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:05.343 08:49:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:05.343 08:49:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:05.343 08:49:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.343 08:49:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.343 08:49:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:05.343 08:49:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:05.343 08:49:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.343 08:49:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:05.343 08:49:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.343 08:49:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.343 [2024-11-20 08:49:36.192899] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:05.343 08:49:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.343 08:49:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:05.343 08:49:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:05.343 08:49:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.343 08:49:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.344 08:49:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.344 08:49:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:05.344 08:49:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.344 08:49:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.344 08:49:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.344 08:49:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.344 08:49:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.344 08:49:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.344 08:49:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.344 08:49:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.344 08:49:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.344 08:49:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.344 "name": "raid_bdev1", 00:16:05.344 "uuid": "786d8b1b-0e54-4b80-b045-4d827d1514cf", 00:16:05.344 "strip_size_kb": 0, 00:16:05.344 "state": "online", 00:16:05.344 "raid_level": "raid1", 00:16:05.344 "superblock": false, 00:16:05.344 "num_base_bdevs": 4, 00:16:05.344 "num_base_bdevs_discovered": 3, 00:16:05.344 "num_base_bdevs_operational": 3, 00:16:05.344 "base_bdevs_list": [ 00:16:05.344 { 00:16:05.344 "name": null, 00:16:05.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.344 
"is_configured": false, 00:16:05.344 "data_offset": 0, 00:16:05.344 "data_size": 65536 00:16:05.344 }, 00:16:05.344 { 00:16:05.344 "name": "BaseBdev2", 00:16:05.344 "uuid": "d8c3b6d3-a891-5223-a890-b684e2a41c0c", 00:16:05.344 "is_configured": true, 00:16:05.344 "data_offset": 0, 00:16:05.344 "data_size": 65536 00:16:05.344 }, 00:16:05.344 { 00:16:05.344 "name": "BaseBdev3", 00:16:05.344 "uuid": "c63ec73b-be8b-5b7d-82dd-aef817672fe8", 00:16:05.344 "is_configured": true, 00:16:05.344 "data_offset": 0, 00:16:05.344 "data_size": 65536 00:16:05.344 }, 00:16:05.344 { 00:16:05.344 "name": "BaseBdev4", 00:16:05.344 "uuid": "a0643124-a73e-5c9f-9a68-092ac2ab9cae", 00:16:05.344 "is_configured": true, 00:16:05.344 "data_offset": 0, 00:16:05.344 "data_size": 65536 00:16:05.344 } 00:16:05.344 ] 00:16:05.344 }' 00:16:05.344 08:49:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.344 08:49:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.912 08:49:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:05.912 08:49:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.912 08:49:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.912 [2024-11-20 08:49:36.713095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:05.912 [2024-11-20 08:49:36.727670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:16:05.912 08:49:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.912 08:49:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:05.912 [2024-11-20 08:49:36.730256] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:06.846 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.846 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.846 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.846 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.846 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.846 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.846 08:49:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.846 08:49:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.846 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.846 08:49:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.105 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.105 "name": "raid_bdev1", 00:16:07.105 "uuid": "786d8b1b-0e54-4b80-b045-4d827d1514cf", 00:16:07.105 "strip_size_kb": 0, 00:16:07.105 "state": "online", 00:16:07.105 "raid_level": "raid1", 00:16:07.105 "superblock": false, 00:16:07.105 "num_base_bdevs": 4, 00:16:07.105 "num_base_bdevs_discovered": 4, 00:16:07.105 "num_base_bdevs_operational": 4, 00:16:07.105 "process": { 00:16:07.105 "type": "rebuild", 00:16:07.105 "target": "spare", 00:16:07.105 "progress": { 00:16:07.105 "blocks": 20480, 00:16:07.105 "percent": 31 00:16:07.105 } 00:16:07.105 }, 00:16:07.105 "base_bdevs_list": [ 00:16:07.105 { 00:16:07.105 "name": "spare", 00:16:07.105 "uuid": "a7735f0f-802c-550a-85c7-480a7cd75415", 00:16:07.105 "is_configured": true, 00:16:07.105 "data_offset": 0, 00:16:07.105 "data_size": 65536 00:16:07.105 }, 00:16:07.105 { 00:16:07.105 "name": "BaseBdev2", 00:16:07.105 "uuid": 
"d8c3b6d3-a891-5223-a890-b684e2a41c0c", 00:16:07.105 "is_configured": true, 00:16:07.105 "data_offset": 0, 00:16:07.105 "data_size": 65536 00:16:07.105 }, 00:16:07.105 { 00:16:07.105 "name": "BaseBdev3", 00:16:07.105 "uuid": "c63ec73b-be8b-5b7d-82dd-aef817672fe8", 00:16:07.105 "is_configured": true, 00:16:07.105 "data_offset": 0, 00:16:07.105 "data_size": 65536 00:16:07.105 }, 00:16:07.105 { 00:16:07.105 "name": "BaseBdev4", 00:16:07.105 "uuid": "a0643124-a73e-5c9f-9a68-092ac2ab9cae", 00:16:07.105 "is_configured": true, 00:16:07.105 "data_offset": 0, 00:16:07.105 "data_size": 65536 00:16:07.105 } 00:16:07.105 ] 00:16:07.105 }' 00:16:07.105 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.105 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.105 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.105 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.105 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:07.105 08:49:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.105 08:49:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.105 [2024-11-20 08:49:37.892064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.105 [2024-11-20 08:49:37.938748] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:07.105 [2024-11-20 08:49:37.939023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.105 [2024-11-20 08:49:37.939055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.105 [2024-11-20 08:49:37.939072] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:16:07.105 08:49:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.105 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:07.105 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.105 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.105 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.105 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.105 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:07.105 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.105 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.105 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.105 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.105 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.105 08:49:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.105 08:49:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.105 08:49:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.105 08:49:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.105 08:49:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.105 "name": "raid_bdev1", 00:16:07.105 "uuid": "786d8b1b-0e54-4b80-b045-4d827d1514cf", 00:16:07.105 "strip_size_kb": 0, 00:16:07.105 "state": "online", 
00:16:07.105 "raid_level": "raid1", 00:16:07.105 "superblock": false, 00:16:07.105 "num_base_bdevs": 4, 00:16:07.105 "num_base_bdevs_discovered": 3, 00:16:07.105 "num_base_bdevs_operational": 3, 00:16:07.105 "base_bdevs_list": [ 00:16:07.105 { 00:16:07.106 "name": null, 00:16:07.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.106 "is_configured": false, 00:16:07.106 "data_offset": 0, 00:16:07.106 "data_size": 65536 00:16:07.106 }, 00:16:07.106 { 00:16:07.106 "name": "BaseBdev2", 00:16:07.106 "uuid": "d8c3b6d3-a891-5223-a890-b684e2a41c0c", 00:16:07.106 "is_configured": true, 00:16:07.106 "data_offset": 0, 00:16:07.106 "data_size": 65536 00:16:07.106 }, 00:16:07.106 { 00:16:07.106 "name": "BaseBdev3", 00:16:07.106 "uuid": "c63ec73b-be8b-5b7d-82dd-aef817672fe8", 00:16:07.106 "is_configured": true, 00:16:07.106 "data_offset": 0, 00:16:07.106 "data_size": 65536 00:16:07.106 }, 00:16:07.106 { 00:16:07.106 "name": "BaseBdev4", 00:16:07.106 "uuid": "a0643124-a73e-5c9f-9a68-092ac2ab9cae", 00:16:07.106 "is_configured": true, 00:16:07.106 "data_offset": 0, 00:16:07.106 "data_size": 65536 00:16:07.106 } 00:16:07.106 ] 00:16:07.106 }' 00:16:07.106 08:49:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.106 08:49:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.672 08:49:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:07.673 08:49:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.673 08:49:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:07.673 08:49:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:07.673 08:49:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.673 08:49:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:07.673 08:49:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.673 08:49:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.673 08:49:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.673 08:49:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.673 08:49:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.673 "name": "raid_bdev1", 00:16:07.673 "uuid": "786d8b1b-0e54-4b80-b045-4d827d1514cf", 00:16:07.673 "strip_size_kb": 0, 00:16:07.673 "state": "online", 00:16:07.673 "raid_level": "raid1", 00:16:07.673 "superblock": false, 00:16:07.673 "num_base_bdevs": 4, 00:16:07.673 "num_base_bdevs_discovered": 3, 00:16:07.673 "num_base_bdevs_operational": 3, 00:16:07.673 "base_bdevs_list": [ 00:16:07.673 { 00:16:07.673 "name": null, 00:16:07.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.673 "is_configured": false, 00:16:07.673 "data_offset": 0, 00:16:07.673 "data_size": 65536 00:16:07.673 }, 00:16:07.673 { 00:16:07.673 "name": "BaseBdev2", 00:16:07.673 "uuid": "d8c3b6d3-a891-5223-a890-b684e2a41c0c", 00:16:07.673 "is_configured": true, 00:16:07.673 "data_offset": 0, 00:16:07.673 "data_size": 65536 00:16:07.673 }, 00:16:07.673 { 00:16:07.673 "name": "BaseBdev3", 00:16:07.673 "uuid": "c63ec73b-be8b-5b7d-82dd-aef817672fe8", 00:16:07.673 "is_configured": true, 00:16:07.673 "data_offset": 0, 00:16:07.673 "data_size": 65536 00:16:07.673 }, 00:16:07.673 { 00:16:07.673 "name": "BaseBdev4", 00:16:07.673 "uuid": "a0643124-a73e-5c9f-9a68-092ac2ab9cae", 00:16:07.673 "is_configured": true, 00:16:07.673 "data_offset": 0, 00:16:07.673 "data_size": 65536 00:16:07.673 } 00:16:07.673 ] 00:16:07.673 }' 00:16:07.673 08:49:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.673 08:49:38 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:07.673 08:49:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.933 08:49:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:07.933 08:49:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:07.933 08:49:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.933 08:49:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.933 [2024-11-20 08:49:38.634846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.933 [2024-11-20 08:49:38.648417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:16:07.933 08:49:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.933 08:49:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:07.933 [2024-11-20 08:49:38.651100] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:08.886 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.886 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.886 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.886 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.886 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.886 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.886 08:49:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.886 08:49:39 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:08.886 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.886 08:49:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.886 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.886 "name": "raid_bdev1", 00:16:08.886 "uuid": "786d8b1b-0e54-4b80-b045-4d827d1514cf", 00:16:08.886 "strip_size_kb": 0, 00:16:08.886 "state": "online", 00:16:08.886 "raid_level": "raid1", 00:16:08.886 "superblock": false, 00:16:08.886 "num_base_bdevs": 4, 00:16:08.886 "num_base_bdevs_discovered": 4, 00:16:08.886 "num_base_bdevs_operational": 4, 00:16:08.886 "process": { 00:16:08.886 "type": "rebuild", 00:16:08.886 "target": "spare", 00:16:08.886 "progress": { 00:16:08.886 "blocks": 20480, 00:16:08.886 "percent": 31 00:16:08.886 } 00:16:08.886 }, 00:16:08.886 "base_bdevs_list": [ 00:16:08.886 { 00:16:08.886 "name": "spare", 00:16:08.886 "uuid": "a7735f0f-802c-550a-85c7-480a7cd75415", 00:16:08.886 "is_configured": true, 00:16:08.886 "data_offset": 0, 00:16:08.886 "data_size": 65536 00:16:08.886 }, 00:16:08.886 { 00:16:08.886 "name": "BaseBdev2", 00:16:08.886 "uuid": "d8c3b6d3-a891-5223-a890-b684e2a41c0c", 00:16:08.886 "is_configured": true, 00:16:08.886 "data_offset": 0, 00:16:08.886 "data_size": 65536 00:16:08.886 }, 00:16:08.886 { 00:16:08.886 "name": "BaseBdev3", 00:16:08.886 "uuid": "c63ec73b-be8b-5b7d-82dd-aef817672fe8", 00:16:08.886 "is_configured": true, 00:16:08.886 "data_offset": 0, 00:16:08.886 "data_size": 65536 00:16:08.886 }, 00:16:08.886 { 00:16:08.886 "name": "BaseBdev4", 00:16:08.886 "uuid": "a0643124-a73e-5c9f-9a68-092ac2ab9cae", 00:16:08.886 "is_configured": true, 00:16:08.886 "data_offset": 0, 00:16:08.886 "data_size": 65536 00:16:08.886 } 00:16:08.886 ] 00:16:08.886 }' 00:16:08.886 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:16:08.886 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.886 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.886 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.886 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:08.886 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:08.886 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:08.886 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:08.886 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:08.886 08:49:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.886 08:49:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.886 [2024-11-20 08:49:39.796549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:09.144 [2024-11-20 08:49:39.859482] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:16:09.144 08:49:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.144 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:09.144 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:09.144 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.144 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.144 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.144 08:49:39 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.144 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.144 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.144 08:49:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.144 08:49:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.144 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.144 08:49:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.144 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.144 "name": "raid_bdev1", 00:16:09.144 "uuid": "786d8b1b-0e54-4b80-b045-4d827d1514cf", 00:16:09.144 "strip_size_kb": 0, 00:16:09.144 "state": "online", 00:16:09.144 "raid_level": "raid1", 00:16:09.144 "superblock": false, 00:16:09.144 "num_base_bdevs": 4, 00:16:09.144 "num_base_bdevs_discovered": 3, 00:16:09.144 "num_base_bdevs_operational": 3, 00:16:09.144 "process": { 00:16:09.144 "type": "rebuild", 00:16:09.144 "target": "spare", 00:16:09.144 "progress": { 00:16:09.144 "blocks": 24576, 00:16:09.144 "percent": 37 00:16:09.144 } 00:16:09.144 }, 00:16:09.144 "base_bdevs_list": [ 00:16:09.144 { 00:16:09.144 "name": "spare", 00:16:09.144 "uuid": "a7735f0f-802c-550a-85c7-480a7cd75415", 00:16:09.144 "is_configured": true, 00:16:09.144 "data_offset": 0, 00:16:09.144 "data_size": 65536 00:16:09.144 }, 00:16:09.144 { 00:16:09.144 "name": null, 00:16:09.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.144 "is_configured": false, 00:16:09.144 "data_offset": 0, 00:16:09.144 "data_size": 65536 00:16:09.144 }, 00:16:09.144 { 00:16:09.144 "name": "BaseBdev3", 00:16:09.144 "uuid": "c63ec73b-be8b-5b7d-82dd-aef817672fe8", 00:16:09.144 "is_configured": true, 
00:16:09.144 "data_offset": 0, 00:16:09.144 "data_size": 65536 00:16:09.144 }, 00:16:09.144 { 00:16:09.144 "name": "BaseBdev4", 00:16:09.144 "uuid": "a0643124-a73e-5c9f-9a68-092ac2ab9cae", 00:16:09.144 "is_configured": true, 00:16:09.144 "data_offset": 0, 00:16:09.144 "data_size": 65536 00:16:09.144 } 00:16:09.144 ] 00:16:09.144 }' 00:16:09.144 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.144 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.144 08:49:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.144 08:49:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.144 08:49:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=481 00:16:09.144 08:49:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:09.144 08:49:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.144 08:49:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.144 08:49:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.144 08:49:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.144 08:49:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.144 08:49:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.144 08:49:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.144 08:49:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.144 08:49:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.144 08:49:40 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.403 08:49:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.403 "name": "raid_bdev1", 00:16:09.403 "uuid": "786d8b1b-0e54-4b80-b045-4d827d1514cf", 00:16:09.403 "strip_size_kb": 0, 00:16:09.403 "state": "online", 00:16:09.403 "raid_level": "raid1", 00:16:09.403 "superblock": false, 00:16:09.403 "num_base_bdevs": 4, 00:16:09.403 "num_base_bdevs_discovered": 3, 00:16:09.403 "num_base_bdevs_operational": 3, 00:16:09.403 "process": { 00:16:09.403 "type": "rebuild", 00:16:09.403 "target": "spare", 00:16:09.403 "progress": { 00:16:09.403 "blocks": 26624, 00:16:09.403 "percent": 40 00:16:09.403 } 00:16:09.403 }, 00:16:09.403 "base_bdevs_list": [ 00:16:09.403 { 00:16:09.403 "name": "spare", 00:16:09.403 "uuid": "a7735f0f-802c-550a-85c7-480a7cd75415", 00:16:09.403 "is_configured": true, 00:16:09.403 "data_offset": 0, 00:16:09.403 "data_size": 65536 00:16:09.403 }, 00:16:09.403 { 00:16:09.403 "name": null, 00:16:09.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.403 "is_configured": false, 00:16:09.403 "data_offset": 0, 00:16:09.403 "data_size": 65536 00:16:09.403 }, 00:16:09.403 { 00:16:09.403 "name": "BaseBdev3", 00:16:09.403 "uuid": "c63ec73b-be8b-5b7d-82dd-aef817672fe8", 00:16:09.403 "is_configured": true, 00:16:09.403 "data_offset": 0, 00:16:09.403 "data_size": 65536 00:16:09.403 }, 00:16:09.403 { 00:16:09.403 "name": "BaseBdev4", 00:16:09.403 "uuid": "a0643124-a73e-5c9f-9a68-092ac2ab9cae", 00:16:09.403 "is_configured": true, 00:16:09.403 "data_offset": 0, 00:16:09.403 "data_size": 65536 00:16:09.403 } 00:16:09.403 ] 00:16:09.403 }' 00:16:09.403 08:49:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.403 08:49:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.403 08:49:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:16:09.403 08:49:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.403 08:49:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:10.339 08:49:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:10.339 08:49:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.339 08:49:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.340 08:49:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.340 08:49:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.340 08:49:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.340 08:49:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.340 08:49:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.340 08:49:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.340 08:49:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.340 08:49:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.340 08:49:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.340 "name": "raid_bdev1", 00:16:10.340 "uuid": "786d8b1b-0e54-4b80-b045-4d827d1514cf", 00:16:10.340 "strip_size_kb": 0, 00:16:10.340 "state": "online", 00:16:10.340 "raid_level": "raid1", 00:16:10.340 "superblock": false, 00:16:10.340 "num_base_bdevs": 4, 00:16:10.340 "num_base_bdevs_discovered": 3, 00:16:10.340 "num_base_bdevs_operational": 3, 00:16:10.340 "process": { 00:16:10.340 "type": "rebuild", 00:16:10.340 "target": "spare", 00:16:10.340 "progress": { 00:16:10.340 
"blocks": 51200, 00:16:10.340 "percent": 78 00:16:10.340 } 00:16:10.340 }, 00:16:10.340 "base_bdevs_list": [ 00:16:10.340 { 00:16:10.340 "name": "spare", 00:16:10.340 "uuid": "a7735f0f-802c-550a-85c7-480a7cd75415", 00:16:10.340 "is_configured": true, 00:16:10.340 "data_offset": 0, 00:16:10.340 "data_size": 65536 00:16:10.340 }, 00:16:10.340 { 00:16:10.340 "name": null, 00:16:10.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.340 "is_configured": false, 00:16:10.340 "data_offset": 0, 00:16:10.340 "data_size": 65536 00:16:10.340 }, 00:16:10.340 { 00:16:10.340 "name": "BaseBdev3", 00:16:10.340 "uuid": "c63ec73b-be8b-5b7d-82dd-aef817672fe8", 00:16:10.340 "is_configured": true, 00:16:10.340 "data_offset": 0, 00:16:10.340 "data_size": 65536 00:16:10.340 }, 00:16:10.340 { 00:16:10.340 "name": "BaseBdev4", 00:16:10.340 "uuid": "a0643124-a73e-5c9f-9a68-092ac2ab9cae", 00:16:10.340 "is_configured": true, 00:16:10.340 "data_offset": 0, 00:16:10.340 "data_size": 65536 00:16:10.340 } 00:16:10.340 ] 00:16:10.340 }' 00:16:10.340 08:49:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.598 08:49:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.598 08:49:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.598 08:49:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.598 08:49:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:11.164 [2024-11-20 08:49:41.873168] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:11.164 [2024-11-20 08:49:41.873290] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:11.164 [2024-11-20 08:49:41.873362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.422 08:49:42 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:11.422 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.422 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.422 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.422 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.422 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.422 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.422 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.422 08:49:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.422 08:49:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.680 08:49:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.681 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.681 "name": "raid_bdev1", 00:16:11.681 "uuid": "786d8b1b-0e54-4b80-b045-4d827d1514cf", 00:16:11.681 "strip_size_kb": 0, 00:16:11.681 "state": "online", 00:16:11.681 "raid_level": "raid1", 00:16:11.681 "superblock": false, 00:16:11.681 "num_base_bdevs": 4, 00:16:11.681 "num_base_bdevs_discovered": 3, 00:16:11.681 "num_base_bdevs_operational": 3, 00:16:11.681 "base_bdevs_list": [ 00:16:11.681 { 00:16:11.681 "name": "spare", 00:16:11.681 "uuid": "a7735f0f-802c-550a-85c7-480a7cd75415", 00:16:11.681 "is_configured": true, 00:16:11.681 "data_offset": 0, 00:16:11.681 "data_size": 65536 00:16:11.681 }, 00:16:11.681 { 00:16:11.681 "name": null, 00:16:11.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.681 "is_configured": false, 00:16:11.681 
"data_offset": 0, 00:16:11.681 "data_size": 65536 00:16:11.681 }, 00:16:11.681 { 00:16:11.681 "name": "BaseBdev3", 00:16:11.681 "uuid": "c63ec73b-be8b-5b7d-82dd-aef817672fe8", 00:16:11.681 "is_configured": true, 00:16:11.681 "data_offset": 0, 00:16:11.681 "data_size": 65536 00:16:11.681 }, 00:16:11.681 { 00:16:11.681 "name": "BaseBdev4", 00:16:11.681 "uuid": "a0643124-a73e-5c9f-9a68-092ac2ab9cae", 00:16:11.681 "is_configured": true, 00:16:11.681 "data_offset": 0, 00:16:11.681 "data_size": 65536 00:16:11.681 } 00:16:11.681 ] 00:16:11.681 }' 00:16:11.681 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.681 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:11.681 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.681 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:11.681 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:11.681 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:11.681 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.681 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:11.681 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:11.681 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.681 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.681 08:49:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.681 08:49:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.681 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.681 08:49:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.681 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.681 "name": "raid_bdev1", 00:16:11.681 "uuid": "786d8b1b-0e54-4b80-b045-4d827d1514cf", 00:16:11.681 "strip_size_kb": 0, 00:16:11.681 "state": "online", 00:16:11.681 "raid_level": "raid1", 00:16:11.681 "superblock": false, 00:16:11.681 "num_base_bdevs": 4, 00:16:11.681 "num_base_bdevs_discovered": 3, 00:16:11.681 "num_base_bdevs_operational": 3, 00:16:11.681 "base_bdevs_list": [ 00:16:11.681 { 00:16:11.681 "name": "spare", 00:16:11.681 "uuid": "a7735f0f-802c-550a-85c7-480a7cd75415", 00:16:11.681 "is_configured": true, 00:16:11.681 "data_offset": 0, 00:16:11.681 "data_size": 65536 00:16:11.681 }, 00:16:11.681 { 00:16:11.681 "name": null, 00:16:11.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.681 "is_configured": false, 00:16:11.681 "data_offset": 0, 00:16:11.681 "data_size": 65536 00:16:11.681 }, 00:16:11.681 { 00:16:11.681 "name": "BaseBdev3", 00:16:11.681 "uuid": "c63ec73b-be8b-5b7d-82dd-aef817672fe8", 00:16:11.681 "is_configured": true, 00:16:11.681 "data_offset": 0, 00:16:11.681 "data_size": 65536 00:16:11.681 }, 00:16:11.681 { 00:16:11.681 "name": "BaseBdev4", 00:16:11.681 "uuid": "a0643124-a73e-5c9f-9a68-092ac2ab9cae", 00:16:11.681 "is_configured": true, 00:16:11.681 "data_offset": 0, 00:16:11.681 "data_size": 65536 00:16:11.681 } 00:16:11.681 ] 00:16:11.681 }' 00:16:11.681 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.681 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:11.681 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.940 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:11.940 
08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:11.940 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.940 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.940 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.940 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.940 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:11.940 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.940 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.940 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.940 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.940 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.940 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.940 08:49:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.940 08:49:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.940 08:49:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.940 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.940 "name": "raid_bdev1", 00:16:11.940 "uuid": "786d8b1b-0e54-4b80-b045-4d827d1514cf", 00:16:11.940 "strip_size_kb": 0, 00:16:11.940 "state": "online", 00:16:11.940 "raid_level": "raid1", 00:16:11.940 "superblock": false, 00:16:11.940 "num_base_bdevs": 4, 00:16:11.940 "num_base_bdevs_discovered": 
3, 00:16:11.940 "num_base_bdevs_operational": 3, 00:16:11.940 "base_bdevs_list": [ 00:16:11.940 { 00:16:11.940 "name": "spare", 00:16:11.940 "uuid": "a7735f0f-802c-550a-85c7-480a7cd75415", 00:16:11.940 "is_configured": true, 00:16:11.940 "data_offset": 0, 00:16:11.940 "data_size": 65536 00:16:11.940 }, 00:16:11.940 { 00:16:11.940 "name": null, 00:16:11.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.940 "is_configured": false, 00:16:11.940 "data_offset": 0, 00:16:11.940 "data_size": 65536 00:16:11.940 }, 00:16:11.940 { 00:16:11.940 "name": "BaseBdev3", 00:16:11.940 "uuid": "c63ec73b-be8b-5b7d-82dd-aef817672fe8", 00:16:11.940 "is_configured": true, 00:16:11.940 "data_offset": 0, 00:16:11.940 "data_size": 65536 00:16:11.940 }, 00:16:11.940 { 00:16:11.940 "name": "BaseBdev4", 00:16:11.940 "uuid": "a0643124-a73e-5c9f-9a68-092ac2ab9cae", 00:16:11.940 "is_configured": true, 00:16:11.940 "data_offset": 0, 00:16:11.940 "data_size": 65536 00:16:11.940 } 00:16:11.940 ] 00:16:11.940 }' 00:16:11.940 08:49:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.940 08:49:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.507 08:49:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:12.507 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.507 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.507 [2024-11-20 08:49:43.145147] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:12.507 [2024-11-20 08:49:43.145364] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:12.507 [2024-11-20 08:49:43.145568] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.507 [2024-11-20 08:49:43.145685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:16:12.507 [2024-11-20 08:49:43.145703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:12.507 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.507 08:49:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.507 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.507 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.507 08:49:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:12.507 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.507 08:49:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:12.507 08:49:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:12.507 08:49:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:12.507 08:49:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:12.507 08:49:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:12.507 08:49:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:12.507 08:49:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:12.507 08:49:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:12.507 08:49:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:12.508 08:49:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:12.508 08:49:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:12.508 08:49:43 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:12.508 08:49:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:12.766 /dev/nbd0 00:16:12.766 08:49:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:12.766 08:49:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:12.766 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:12.766 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:12.766 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:12.766 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:12.766 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:12.766 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:12.766 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:12.766 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:12.766 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:12.766 1+0 records in 00:16:12.766 1+0 records out 00:16:12.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487303 s, 8.4 MB/s 00:16:12.766 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:12.766 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:12.766 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:12.766 08:49:43 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:12.766 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:12.766 08:49:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:12.766 08:49:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:12.766 08:49:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:13.026 /dev/nbd1 00:16:13.026 08:49:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:13.026 08:49:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:13.026 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:13.026 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:13.026 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:13.026 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:13.026 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:13.026 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:13.026 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:13.026 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:13.026 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:13.026 1+0 records in 00:16:13.026 1+0 records out 00:16:13.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364695 s, 11.2 MB/s 00:16:13.026 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.026 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:13.026 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.026 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:13.026 08:49:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:13.026 08:49:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:13.026 08:49:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:13.026 08:49:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:13.311 08:49:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:13.311 08:49:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:13.311 08:49:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:13.311 08:49:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:13.311 08:49:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:13.311 08:49:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:13.311 08:49:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:13.571 08:49:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:13.571 08:49:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:13.571 08:49:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:13.571 08:49:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:16:13.571 08:49:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:13.571 08:49:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:13.571 08:49:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:13.571 08:49:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:13.571 08:49:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:13.571 08:49:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:13.831 08:49:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:13.831 08:49:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:13.831 08:49:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:13.831 08:49:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:13.831 08:49:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:13.831 08:49:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:13.831 08:49:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:13.831 08:49:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:13.831 08:49:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:13.831 08:49:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77794 00:16:13.831 08:49:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77794 ']' 00:16:13.831 08:49:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77794 00:16:13.831 08:49:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:13.831 08:49:44 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.831 08:49:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77794 00:16:13.831 08:49:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:13.831 08:49:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:13.831 killing process with pid 77794 00:16:13.831 08:49:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77794' 00:16:13.831 08:49:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77794 00:16:13.831 Received shutdown signal, test time was about 60.000000 seconds 00:16:13.831 00:16:13.831 Latency(us) 00:16:13.831 [2024-11-20T08:49:44.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.831 [2024-11-20T08:49:44.747Z] =================================================================================================================== 00:16:13.831 [2024-11-20T08:49:44.747Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:13.831 [2024-11-20 08:49:44.741936] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:13.831 08:49:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77794 00:16:14.405 [2024-11-20 08:49:45.150013] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:15.337 08:49:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:15.337 00:16:15.337 real 0m20.694s 00:16:15.337 user 0m23.240s 00:16:15.337 sys 0m3.572s 00:16:15.337 08:49:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.337 08:49:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.337 ************************************ 00:16:15.337 END TEST raid_rebuild_test 00:16:15.337 ************************************ 00:16:15.337 
08:49:46 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:16:15.337 08:49:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:15.337 08:49:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.337 08:49:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.337 ************************************ 00:16:15.337 START TEST raid_rebuild_test_sb 00:16:15.337 ************************************ 00:16:15.337 08:49:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:16:15.337 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:15.337 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:15.337 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:15.337 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:15.337 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:15.337 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:15.337 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:15.337 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:15.337 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:15.337 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:15.337 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:15.338 
08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78275 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T 
raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78275 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78275 ']' 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.338 08:49:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.595 [2024-11-20 08:49:46.300121] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:16:15.595 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:15.595 Zero copy mechanism will not be used. 
00:16:15.595 [2024-11-20 08:49:46.300314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78275 ] 00:16:15.595 [2024-11-20 08:49:46.474293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.852 [2024-11-20 08:49:46.599641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.109 [2024-11-20 08:49:46.793373] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.109 [2024-11-20 08:49:46.793456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.682 BaseBdev1_malloc 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.682 [2024-11-20 08:49:47.356726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:16:16.682 [2024-11-20 08:49:47.356839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.682 [2024-11-20 08:49:47.356872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:16.682 [2024-11-20 08:49:47.356892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.682 [2024-11-20 08:49:47.359705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.682 [2024-11-20 08:49:47.359768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:16.682 BaseBdev1 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.682 BaseBdev2_malloc 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.682 [2024-11-20 08:49:47.411220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:16.682 [2024-11-20 08:49:47.411307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.682 [2024-11-20 08:49:47.411335] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:16.682 [2024-11-20 08:49:47.411354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.682 [2024-11-20 08:49:47.414081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.682 [2024-11-20 08:49:47.414158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:16.682 BaseBdev2 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.682 BaseBdev3_malloc 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.682 [2024-11-20 08:49:47.473981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:16.682 [2024-11-20 08:49:47.474048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.682 [2024-11-20 08:49:47.474079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:16.682 [2024-11-20 08:49:47.474097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:16.682 [2024-11-20 08:49:47.476831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.682 [2024-11-20 08:49:47.476884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:16.682 BaseBdev3 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.682 BaseBdev4_malloc 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.682 [2024-11-20 08:49:47.530089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:16.682 [2024-11-20 08:49:47.530185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.682 [2024-11-20 08:49:47.530214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:16.682 [2024-11-20 08:49:47.530232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.682 [2024-11-20 08:49:47.532915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.682 [2024-11-20 08:49:47.532964] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:16.682 BaseBdev4 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.682 spare_malloc 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.682 spare_delay 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.682 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.682 [2024-11-20 08:49:47.589671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:16.682 [2024-11-20 08:49:47.589741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.683 [2024-11-20 08:49:47.589769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:16.683 [2024-11-20 08:49:47.589787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:16.683 [2024-11-20 08:49:47.592553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.683 [2024-11-20 08:49:47.592638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:16.683 spare 00:16:16.683 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.683 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:16.683 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.683 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.940 [2024-11-20 08:49:47.597725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:16.940 [2024-11-20 08:49:47.600142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:16.940 [2024-11-20 08:49:47.600282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:16.940 [2024-11-20 08:49:47.600367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:16.940 [2024-11-20 08:49:47.600609] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:16.940 [2024-11-20 08:49:47.600644] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:16.940 [2024-11-20 08:49:47.600953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:16.940 [2024-11-20 08:49:47.601199] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:16.940 [2024-11-20 08:49:47.601217] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:16.940 [2024-11-20 08:49:47.601401] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.940 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.941 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:16.941 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.941 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.941 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.941 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.941 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:16.941 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.941 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.941 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.941 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.941 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.941 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.941 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.941 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.941 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.941 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.941 "name": "raid_bdev1", 00:16:16.941 "uuid": 
"114be33b-ec94-47c1-a83c-faebc0cf0ced", 00:16:16.941 "strip_size_kb": 0, 00:16:16.941 "state": "online", 00:16:16.941 "raid_level": "raid1", 00:16:16.941 "superblock": true, 00:16:16.941 "num_base_bdevs": 4, 00:16:16.941 "num_base_bdevs_discovered": 4, 00:16:16.941 "num_base_bdevs_operational": 4, 00:16:16.941 "base_bdevs_list": [ 00:16:16.941 { 00:16:16.941 "name": "BaseBdev1", 00:16:16.941 "uuid": "6e89db8a-6370-5b08-9d6a-205e7eac810d", 00:16:16.941 "is_configured": true, 00:16:16.941 "data_offset": 2048, 00:16:16.941 "data_size": 63488 00:16:16.941 }, 00:16:16.941 { 00:16:16.941 "name": "BaseBdev2", 00:16:16.941 "uuid": "c2142136-5bcc-59c8-b2eb-4d13670c34c7", 00:16:16.941 "is_configured": true, 00:16:16.941 "data_offset": 2048, 00:16:16.941 "data_size": 63488 00:16:16.941 }, 00:16:16.941 { 00:16:16.941 "name": "BaseBdev3", 00:16:16.941 "uuid": "b74396ac-5eea-5bd9-b075-45b433a3aff0", 00:16:16.941 "is_configured": true, 00:16:16.941 "data_offset": 2048, 00:16:16.941 "data_size": 63488 00:16:16.941 }, 00:16:16.941 { 00:16:16.941 "name": "BaseBdev4", 00:16:16.941 "uuid": "d54b5923-6305-5271-b7d1-48b442e31a1d", 00:16:16.941 "is_configured": true, 00:16:16.941 "data_offset": 2048, 00:16:16.941 "data_size": 63488 00:16:16.941 } 00:16:16.941 ] 00:16:16.941 }' 00:16:16.941 08:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.941 08:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.508 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:17.508 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:17.508 08:49:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.508 08:49:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.508 [2024-11-20 08:49:48.122285] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:17.508 08:49:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.508 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:17.508 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.508 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:17.508 08:49:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.508 08:49:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.508 08:49:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.508 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:17.508 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:17.508 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:17.508 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:17.508 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:17.508 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:17.508 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:17.508 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:17.508 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:17.508 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:17.508 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:17.508 08:49:48 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:17.508 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:17.508 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:17.767 [2024-11-20 08:49:48.506053] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:17.767 /dev/nbd0 00:16:17.767 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:17.767 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:17.767 08:49:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:17.767 08:49:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:17.767 08:49:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:17.767 08:49:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:17.767 08:49:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:17.767 08:49:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:17.767 08:49:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:17.767 08:49:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:17.767 08:49:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:17.767 1+0 records in 00:16:17.767 1+0 records out 00:16:17.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410971 s, 10.0 MB/s 00:16:17.767 08:49:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.767 08:49:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:17.767 08:49:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.767 08:49:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:17.767 08:49:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:17.767 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:17.767 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:17.767 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:17.767 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:17.767 08:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:16:25.878 63488+0 records in 00:16:25.878 63488+0 records out 00:16:25.878 32505856 bytes (33 MB, 31 MiB) copied, 7.88305 s, 4.1 MB/s 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:16:25.878 [2024-11-20 08:49:56.721362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.878 [2024-11-20 08:49:56.753440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.878 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.879 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.879 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.879 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.879 08:49:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.879 08:49:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.879 08:49:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.137 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.137 "name": "raid_bdev1", 00:16:26.137 "uuid": "114be33b-ec94-47c1-a83c-faebc0cf0ced", 00:16:26.137 "strip_size_kb": 0, 00:16:26.137 "state": "online", 00:16:26.137 "raid_level": "raid1", 00:16:26.137 "superblock": true, 00:16:26.137 "num_base_bdevs": 4, 00:16:26.138 "num_base_bdevs_discovered": 3, 00:16:26.138 "num_base_bdevs_operational": 3, 00:16:26.138 "base_bdevs_list": [ 00:16:26.138 { 00:16:26.138 "name": null, 00:16:26.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.138 "is_configured": false, 00:16:26.138 "data_offset": 0, 00:16:26.138 "data_size": 63488 00:16:26.138 }, 00:16:26.138 { 00:16:26.138 "name": "BaseBdev2", 00:16:26.138 "uuid": "c2142136-5bcc-59c8-b2eb-4d13670c34c7", 00:16:26.138 "is_configured": true, 00:16:26.138 
"data_offset": 2048, 00:16:26.138 "data_size": 63488 00:16:26.138 }, 00:16:26.138 { 00:16:26.138 "name": "BaseBdev3", 00:16:26.138 "uuid": "b74396ac-5eea-5bd9-b075-45b433a3aff0", 00:16:26.138 "is_configured": true, 00:16:26.138 "data_offset": 2048, 00:16:26.138 "data_size": 63488 00:16:26.138 }, 00:16:26.138 { 00:16:26.138 "name": "BaseBdev4", 00:16:26.138 "uuid": "d54b5923-6305-5271-b7d1-48b442e31a1d", 00:16:26.138 "is_configured": true, 00:16:26.138 "data_offset": 2048, 00:16:26.138 "data_size": 63488 00:16:26.138 } 00:16:26.138 ] 00:16:26.138 }' 00:16:26.138 08:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.138 08:49:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.396 08:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:26.396 08:49:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.396 08:49:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.396 [2024-11-20 08:49:57.261585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:26.396 [2024-11-20 08:49:57.275953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:16:26.396 08:49:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.396 08:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:26.396 [2024-11-20 08:49:57.278410] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:27.772 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.772 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.772 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:16:27.772 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.772 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.772 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.772 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.772 08:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.772 08:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.772 08:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.772 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.772 "name": "raid_bdev1", 00:16:27.772 "uuid": "114be33b-ec94-47c1-a83c-faebc0cf0ced", 00:16:27.772 "strip_size_kb": 0, 00:16:27.772 "state": "online", 00:16:27.772 "raid_level": "raid1", 00:16:27.772 "superblock": true, 00:16:27.772 "num_base_bdevs": 4, 00:16:27.772 "num_base_bdevs_discovered": 4, 00:16:27.772 "num_base_bdevs_operational": 4, 00:16:27.772 "process": { 00:16:27.772 "type": "rebuild", 00:16:27.772 "target": "spare", 00:16:27.772 "progress": { 00:16:27.772 "blocks": 20480, 00:16:27.772 "percent": 32 00:16:27.772 } 00:16:27.772 }, 00:16:27.772 "base_bdevs_list": [ 00:16:27.772 { 00:16:27.773 "name": "spare", 00:16:27.773 "uuid": "ab293204-6535-5435-b2ee-ea3e8d841ee4", 00:16:27.773 "is_configured": true, 00:16:27.773 "data_offset": 2048, 00:16:27.773 "data_size": 63488 00:16:27.773 }, 00:16:27.773 { 00:16:27.773 "name": "BaseBdev2", 00:16:27.773 "uuid": "c2142136-5bcc-59c8-b2eb-4d13670c34c7", 00:16:27.773 "is_configured": true, 00:16:27.773 "data_offset": 2048, 00:16:27.773 "data_size": 63488 00:16:27.773 }, 00:16:27.773 { 00:16:27.773 "name": "BaseBdev3", 00:16:27.773 "uuid": 
"b74396ac-5eea-5bd9-b075-45b433a3aff0", 00:16:27.773 "is_configured": true, 00:16:27.773 "data_offset": 2048, 00:16:27.773 "data_size": 63488 00:16:27.773 }, 00:16:27.773 { 00:16:27.773 "name": "BaseBdev4", 00:16:27.773 "uuid": "d54b5923-6305-5271-b7d1-48b442e31a1d", 00:16:27.773 "is_configured": true, 00:16:27.773 "data_offset": 2048, 00:16:27.773 "data_size": 63488 00:16:27.773 } 00:16:27.773 ] 00:16:27.773 }' 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.773 [2024-11-20 08:49:58.451606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:27.773 [2024-11-20 08:49:58.486775] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:27.773 [2024-11-20 08:49:58.486879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.773 [2024-11-20 08:49:58.486903] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:27.773 [2024-11-20 08:49:58.486916] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.773 "name": "raid_bdev1", 00:16:27.773 "uuid": "114be33b-ec94-47c1-a83c-faebc0cf0ced", 00:16:27.773 "strip_size_kb": 0, 00:16:27.773 "state": "online", 00:16:27.773 "raid_level": "raid1", 00:16:27.773 "superblock": true, 00:16:27.773 "num_base_bdevs": 4, 00:16:27.773 
"num_base_bdevs_discovered": 3, 00:16:27.773 "num_base_bdevs_operational": 3, 00:16:27.773 "base_bdevs_list": [ 00:16:27.773 { 00:16:27.773 "name": null, 00:16:27.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.773 "is_configured": false, 00:16:27.773 "data_offset": 0, 00:16:27.773 "data_size": 63488 00:16:27.773 }, 00:16:27.773 { 00:16:27.773 "name": "BaseBdev2", 00:16:27.773 "uuid": "c2142136-5bcc-59c8-b2eb-4d13670c34c7", 00:16:27.773 "is_configured": true, 00:16:27.773 "data_offset": 2048, 00:16:27.773 "data_size": 63488 00:16:27.773 }, 00:16:27.773 { 00:16:27.773 "name": "BaseBdev3", 00:16:27.773 "uuid": "b74396ac-5eea-5bd9-b075-45b433a3aff0", 00:16:27.773 "is_configured": true, 00:16:27.773 "data_offset": 2048, 00:16:27.773 "data_size": 63488 00:16:27.773 }, 00:16:27.773 { 00:16:27.773 "name": "BaseBdev4", 00:16:27.773 "uuid": "d54b5923-6305-5271-b7d1-48b442e31a1d", 00:16:27.773 "is_configured": true, 00:16:27.773 "data_offset": 2048, 00:16:27.773 "data_size": 63488 00:16:27.773 } 00:16:27.773 ] 00:16:27.773 }' 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.773 08:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.340 08:49:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:28.340 08:49:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.340 08:49:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:28.340 08:49:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:28.340 08:49:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.340 08:49:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.340 08:49:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:16:28.340 08:49:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.340 08:49:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.340 08:49:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.340 08:49:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.340 "name": "raid_bdev1", 00:16:28.340 "uuid": "114be33b-ec94-47c1-a83c-faebc0cf0ced", 00:16:28.340 "strip_size_kb": 0, 00:16:28.340 "state": "online", 00:16:28.340 "raid_level": "raid1", 00:16:28.340 "superblock": true, 00:16:28.340 "num_base_bdevs": 4, 00:16:28.340 "num_base_bdevs_discovered": 3, 00:16:28.340 "num_base_bdevs_operational": 3, 00:16:28.340 "base_bdevs_list": [ 00:16:28.340 { 00:16:28.340 "name": null, 00:16:28.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.340 "is_configured": false, 00:16:28.341 "data_offset": 0, 00:16:28.341 "data_size": 63488 00:16:28.341 }, 00:16:28.341 { 00:16:28.341 "name": "BaseBdev2", 00:16:28.341 "uuid": "c2142136-5bcc-59c8-b2eb-4d13670c34c7", 00:16:28.341 "is_configured": true, 00:16:28.341 "data_offset": 2048, 00:16:28.341 "data_size": 63488 00:16:28.341 }, 00:16:28.341 { 00:16:28.341 "name": "BaseBdev3", 00:16:28.341 "uuid": "b74396ac-5eea-5bd9-b075-45b433a3aff0", 00:16:28.341 "is_configured": true, 00:16:28.341 "data_offset": 2048, 00:16:28.341 "data_size": 63488 00:16:28.341 }, 00:16:28.341 { 00:16:28.341 "name": "BaseBdev4", 00:16:28.341 "uuid": "d54b5923-6305-5271-b7d1-48b442e31a1d", 00:16:28.341 "is_configured": true, 00:16:28.341 "data_offset": 2048, 00:16:28.341 "data_size": 63488 00:16:28.341 } 00:16:28.341 ] 00:16:28.341 }' 00:16:28.341 08:49:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.341 08:49:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
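The repeated `verify_raid_bdev_process` checks above all reduce to the same idiom: fetch the raid bdev JSON via `bdev_raid_get_bdevs`, then use `jq`'s alternative operator (`//`) so a missing `process` object reads as `"none"` instead of `null`. A minimal sketch of that check, runnable without SPDK — the JSON here is a trimmed-down stand-in for real `bdev_raid_get_bdevs` output, not live RPC data, and `jq` is assumed to be installed:

```shell
#!/bin/sh
set -eu

# Stand-in for `rpc.py bdev_raid_get_bdevs all | jq '.[] | select(.name == "raid_bdev1")'`
# while a rebuild onto the spare is in progress (fields trimmed for brevity).
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "process": { "type": "rebuild", "target": "spare" }
}'

# Same filters the test script uses: `// "none"` maps a missing/null
# .process (no background process running) to the literal string "none".
process_type=$(echo "$raid_bdev_info" | jq -r '.process.type // "none"')
process_target=$(echo "$raid_bdev_info" | jq -r '.process.target // "none"')
echo "$process_type $process_target"   # rebuild spare

# Once the rebuild finishes, .process disappears and both checks yield "none".
idle_info='{ "name": "raid_bdev1", "state": "online" }'
idle_type=$(echo "$idle_info" | jq -r '.process.type // "none"')
echo "$idle_type"                      # none
```

This is why the log shows `[[ rebuild == \r\e\b\u\i\l\d ]]` during the rebuild and `[[ none == \n\o\n\e ]]` before and after it: the same two `jq` filters are compared against the expected process type and target for that phase of the test.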
00:16:28.341 08:49:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.341 08:49:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:28.341 08:49:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:28.341 08:49:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.341 08:49:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.341 [2024-11-20 08:49:59.193500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:28.341 [2024-11-20 08:49:59.206817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:16:28.341 08:49:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.341 08:49:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:28.341 [2024-11-20 08:49:59.209309] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.717 "name": "raid_bdev1", 00:16:29.717 "uuid": "114be33b-ec94-47c1-a83c-faebc0cf0ced", 00:16:29.717 "strip_size_kb": 0, 00:16:29.717 "state": "online", 00:16:29.717 "raid_level": "raid1", 00:16:29.717 "superblock": true, 00:16:29.717 "num_base_bdevs": 4, 00:16:29.717 "num_base_bdevs_discovered": 4, 00:16:29.717 "num_base_bdevs_operational": 4, 00:16:29.717 "process": { 00:16:29.717 "type": "rebuild", 00:16:29.717 "target": "spare", 00:16:29.717 "progress": { 00:16:29.717 "blocks": 20480, 00:16:29.717 "percent": 32 00:16:29.717 } 00:16:29.717 }, 00:16:29.717 "base_bdevs_list": [ 00:16:29.717 { 00:16:29.717 "name": "spare", 00:16:29.717 "uuid": "ab293204-6535-5435-b2ee-ea3e8d841ee4", 00:16:29.717 "is_configured": true, 00:16:29.717 "data_offset": 2048, 00:16:29.717 "data_size": 63488 00:16:29.717 }, 00:16:29.717 { 00:16:29.717 "name": "BaseBdev2", 00:16:29.717 "uuid": "c2142136-5bcc-59c8-b2eb-4d13670c34c7", 00:16:29.717 "is_configured": true, 00:16:29.717 "data_offset": 2048, 00:16:29.717 "data_size": 63488 00:16:29.717 }, 00:16:29.717 { 00:16:29.717 "name": "BaseBdev3", 00:16:29.717 "uuid": "b74396ac-5eea-5bd9-b075-45b433a3aff0", 00:16:29.717 "is_configured": true, 00:16:29.717 "data_offset": 2048, 00:16:29.717 "data_size": 63488 00:16:29.717 }, 00:16:29.717 { 00:16:29.717 "name": "BaseBdev4", 00:16:29.717 "uuid": "d54b5923-6305-5271-b7d1-48b442e31a1d", 00:16:29.717 "is_configured": true, 00:16:29.717 "data_offset": 2048, 00:16:29.717 "data_size": 63488 00:16:29.717 } 00:16:29.717 ] 00:16:29.717 }' 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:29.717 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.717 [2024-11-20 08:50:00.374467] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:29.717 [2024-11-20 08:50:00.518086] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.717 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:29.718 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:29.718 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:16:29.718 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.718 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.718 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.718 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.718 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.718 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.718 08:50:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.718 08:50:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.718 08:50:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.718 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.718 "name": "raid_bdev1", 00:16:29.718 "uuid": "114be33b-ec94-47c1-a83c-faebc0cf0ced", 00:16:29.718 "strip_size_kb": 0, 00:16:29.718 "state": "online", 00:16:29.718 "raid_level": "raid1", 00:16:29.718 "superblock": true, 00:16:29.718 "num_base_bdevs": 4, 00:16:29.718 "num_base_bdevs_discovered": 3, 00:16:29.718 "num_base_bdevs_operational": 3, 00:16:29.718 "process": { 00:16:29.718 "type": "rebuild", 00:16:29.718 "target": "spare", 00:16:29.718 "progress": { 00:16:29.718 "blocks": 24576, 00:16:29.718 "percent": 38 00:16:29.718 } 00:16:29.718 }, 00:16:29.718 "base_bdevs_list": [ 00:16:29.718 { 00:16:29.718 "name": "spare", 00:16:29.718 "uuid": "ab293204-6535-5435-b2ee-ea3e8d841ee4", 00:16:29.718 "is_configured": true, 00:16:29.718 "data_offset": 2048, 00:16:29.718 "data_size": 63488 00:16:29.718 }, 00:16:29.718 { 00:16:29.718 "name": null, 00:16:29.718 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:29.718 "is_configured": false, 00:16:29.718 "data_offset": 0, 00:16:29.718 "data_size": 63488 00:16:29.718 }, 00:16:29.718 { 00:16:29.718 "name": "BaseBdev3", 00:16:29.718 "uuid": "b74396ac-5eea-5bd9-b075-45b433a3aff0", 00:16:29.718 "is_configured": true, 00:16:29.718 "data_offset": 2048, 00:16:29.718 "data_size": 63488 00:16:29.718 }, 00:16:29.718 { 00:16:29.718 "name": "BaseBdev4", 00:16:29.718 "uuid": "d54b5923-6305-5271-b7d1-48b442e31a1d", 00:16:29.718 "is_configured": true, 00:16:29.718 "data_offset": 2048, 00:16:29.718 "data_size": 63488 00:16:29.718 } 00:16:29.718 ] 00:16:29.718 }' 00:16:29.718 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.977 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.978 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.978 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.978 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=501 00:16:29.978 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:29.978 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.978 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.978 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.978 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.978 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.978 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.978 
08:50:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.978 08:50:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.978 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.978 08:50:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.978 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.978 "name": "raid_bdev1", 00:16:29.978 "uuid": "114be33b-ec94-47c1-a83c-faebc0cf0ced", 00:16:29.978 "strip_size_kb": 0, 00:16:29.978 "state": "online", 00:16:29.978 "raid_level": "raid1", 00:16:29.978 "superblock": true, 00:16:29.978 "num_base_bdevs": 4, 00:16:29.978 "num_base_bdevs_discovered": 3, 00:16:29.978 "num_base_bdevs_operational": 3, 00:16:29.978 "process": { 00:16:29.978 "type": "rebuild", 00:16:29.978 "target": "spare", 00:16:29.978 "progress": { 00:16:29.978 "blocks": 26624, 00:16:29.978 "percent": 41 00:16:29.978 } 00:16:29.978 }, 00:16:29.978 "base_bdevs_list": [ 00:16:29.978 { 00:16:29.978 "name": "spare", 00:16:29.978 "uuid": "ab293204-6535-5435-b2ee-ea3e8d841ee4", 00:16:29.978 "is_configured": true, 00:16:29.978 "data_offset": 2048, 00:16:29.978 "data_size": 63488 00:16:29.978 }, 00:16:29.978 { 00:16:29.978 "name": null, 00:16:29.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.978 "is_configured": false, 00:16:29.978 "data_offset": 0, 00:16:29.978 "data_size": 63488 00:16:29.978 }, 00:16:29.978 { 00:16:29.978 "name": "BaseBdev3", 00:16:29.978 "uuid": "b74396ac-5eea-5bd9-b075-45b433a3aff0", 00:16:29.978 "is_configured": true, 00:16:29.978 "data_offset": 2048, 00:16:29.978 "data_size": 63488 00:16:29.978 }, 00:16:29.978 { 00:16:29.978 "name": "BaseBdev4", 00:16:29.978 "uuid": "d54b5923-6305-5271-b7d1-48b442e31a1d", 00:16:29.978 "is_configured": true, 00:16:29.978 "data_offset": 2048, 00:16:29.978 "data_size": 63488 
00:16:29.978 } 00:16:29.978 ] 00:16:29.978 }' 00:16:29.978 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.978 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.978 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.978 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.978 08:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:31.358 08:50:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:31.358 08:50:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.358 08:50:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.358 08:50:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.358 08:50:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.358 08:50:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.358 08:50:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.358 08:50:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.358 08:50:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.358 08:50:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.358 08:50:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.358 08:50:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.358 "name": "raid_bdev1", 00:16:31.358 "uuid": 
"114be33b-ec94-47c1-a83c-faebc0cf0ced", 00:16:31.358 "strip_size_kb": 0, 00:16:31.358 "state": "online", 00:16:31.358 "raid_level": "raid1", 00:16:31.358 "superblock": true, 00:16:31.358 "num_base_bdevs": 4, 00:16:31.358 "num_base_bdevs_discovered": 3, 00:16:31.358 "num_base_bdevs_operational": 3, 00:16:31.358 "process": { 00:16:31.358 "type": "rebuild", 00:16:31.358 "target": "spare", 00:16:31.358 "progress": { 00:16:31.358 "blocks": 51200, 00:16:31.358 "percent": 80 00:16:31.358 } 00:16:31.358 }, 00:16:31.358 "base_bdevs_list": [ 00:16:31.358 { 00:16:31.358 "name": "spare", 00:16:31.358 "uuid": "ab293204-6535-5435-b2ee-ea3e8d841ee4", 00:16:31.358 "is_configured": true, 00:16:31.358 "data_offset": 2048, 00:16:31.358 "data_size": 63488 00:16:31.358 }, 00:16:31.358 { 00:16:31.358 "name": null, 00:16:31.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.358 "is_configured": false, 00:16:31.358 "data_offset": 0, 00:16:31.358 "data_size": 63488 00:16:31.358 }, 00:16:31.358 { 00:16:31.358 "name": "BaseBdev3", 00:16:31.358 "uuid": "b74396ac-5eea-5bd9-b075-45b433a3aff0", 00:16:31.358 "is_configured": true, 00:16:31.358 "data_offset": 2048, 00:16:31.358 "data_size": 63488 00:16:31.358 }, 00:16:31.358 { 00:16:31.358 "name": "BaseBdev4", 00:16:31.358 "uuid": "d54b5923-6305-5271-b7d1-48b442e31a1d", 00:16:31.358 "is_configured": true, 00:16:31.358 "data_offset": 2048, 00:16:31.358 "data_size": 63488 00:16:31.358 } 00:16:31.358 ] 00:16:31.358 }' 00:16:31.358 08:50:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.358 08:50:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.358 08:50:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.358 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.358 08:50:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:16:31.617 [2024-11-20 08:50:02.431037] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:31.617 [2024-11-20 08:50:02.431139] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:31.617 [2024-11-20 08:50:02.431301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.184 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:32.184 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.184 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.184 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.184 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.184 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.184 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.184 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.185 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.185 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.185 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.185 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.185 "name": "raid_bdev1", 00:16:32.185 "uuid": "114be33b-ec94-47c1-a83c-faebc0cf0ced", 00:16:32.185 "strip_size_kb": 0, 00:16:32.185 "state": "online", 00:16:32.185 "raid_level": "raid1", 00:16:32.185 "superblock": true, 00:16:32.185 "num_base_bdevs": 
4, 00:16:32.185 "num_base_bdevs_discovered": 3, 00:16:32.185 "num_base_bdevs_operational": 3, 00:16:32.185 "base_bdevs_list": [ 00:16:32.185 { 00:16:32.185 "name": "spare", 00:16:32.185 "uuid": "ab293204-6535-5435-b2ee-ea3e8d841ee4", 00:16:32.185 "is_configured": true, 00:16:32.185 "data_offset": 2048, 00:16:32.185 "data_size": 63488 00:16:32.185 }, 00:16:32.185 { 00:16:32.185 "name": null, 00:16:32.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.185 "is_configured": false, 00:16:32.185 "data_offset": 0, 00:16:32.185 "data_size": 63488 00:16:32.185 }, 00:16:32.185 { 00:16:32.185 "name": "BaseBdev3", 00:16:32.185 "uuid": "b74396ac-5eea-5bd9-b075-45b433a3aff0", 00:16:32.185 "is_configured": true, 00:16:32.185 "data_offset": 2048, 00:16:32.185 "data_size": 63488 00:16:32.185 }, 00:16:32.185 { 00:16:32.185 "name": "BaseBdev4", 00:16:32.185 "uuid": "d54b5923-6305-5271-b7d1-48b442e31a1d", 00:16:32.185 "is_configured": true, 00:16:32.185 "data_offset": 2048, 00:16:32.185 "data_size": 63488 00:16:32.185 } 00:16:32.185 ] 00:16:32.185 }' 00:16:32.185 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:32.444 08:50:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.444 "name": "raid_bdev1", 00:16:32.444 "uuid": "114be33b-ec94-47c1-a83c-faebc0cf0ced", 00:16:32.444 "strip_size_kb": 0, 00:16:32.444 "state": "online", 00:16:32.444 "raid_level": "raid1", 00:16:32.444 "superblock": true, 00:16:32.444 "num_base_bdevs": 4, 00:16:32.444 "num_base_bdevs_discovered": 3, 00:16:32.444 "num_base_bdevs_operational": 3, 00:16:32.444 "base_bdevs_list": [ 00:16:32.444 { 00:16:32.444 "name": "spare", 00:16:32.444 "uuid": "ab293204-6535-5435-b2ee-ea3e8d841ee4", 00:16:32.444 "is_configured": true, 00:16:32.444 "data_offset": 2048, 00:16:32.444 "data_size": 63488 00:16:32.444 }, 00:16:32.444 { 00:16:32.444 "name": null, 00:16:32.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.444 "is_configured": false, 00:16:32.444 "data_offset": 0, 00:16:32.444 "data_size": 63488 00:16:32.444 }, 00:16:32.444 { 00:16:32.444 "name": "BaseBdev3", 00:16:32.444 "uuid": "b74396ac-5eea-5bd9-b075-45b433a3aff0", 00:16:32.444 "is_configured": true, 00:16:32.444 "data_offset": 2048, 00:16:32.444 "data_size": 63488 00:16:32.444 }, 00:16:32.444 { 00:16:32.444 "name": "BaseBdev4", 00:16:32.444 "uuid": 
"d54b5923-6305-5271-b7d1-48b442e31a1d", 00:16:32.444 "is_configured": true, 00:16:32.444 "data_offset": 2048, 00:16:32.444 "data_size": 63488 00:16:32.444 } 00:16:32.444 ] 00:16:32.444 }' 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.444 08:50:03 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.444 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.703 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.703 "name": "raid_bdev1", 00:16:32.703 "uuid": "114be33b-ec94-47c1-a83c-faebc0cf0ced", 00:16:32.703 "strip_size_kb": 0, 00:16:32.703 "state": "online", 00:16:32.703 "raid_level": "raid1", 00:16:32.703 "superblock": true, 00:16:32.703 "num_base_bdevs": 4, 00:16:32.703 "num_base_bdevs_discovered": 3, 00:16:32.703 "num_base_bdevs_operational": 3, 00:16:32.703 "base_bdevs_list": [ 00:16:32.703 { 00:16:32.703 "name": "spare", 00:16:32.703 "uuid": "ab293204-6535-5435-b2ee-ea3e8d841ee4", 00:16:32.703 "is_configured": true, 00:16:32.703 "data_offset": 2048, 00:16:32.703 "data_size": 63488 00:16:32.703 }, 00:16:32.703 { 00:16:32.703 "name": null, 00:16:32.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.703 "is_configured": false, 00:16:32.703 "data_offset": 0, 00:16:32.703 "data_size": 63488 00:16:32.703 }, 00:16:32.703 { 00:16:32.703 "name": "BaseBdev3", 00:16:32.703 "uuid": "b74396ac-5eea-5bd9-b075-45b433a3aff0", 00:16:32.703 "is_configured": true, 00:16:32.703 "data_offset": 2048, 00:16:32.703 "data_size": 63488 00:16:32.703 }, 00:16:32.703 { 00:16:32.703 "name": "BaseBdev4", 00:16:32.703 "uuid": "d54b5923-6305-5271-b7d1-48b442e31a1d", 00:16:32.703 "is_configured": true, 00:16:32.703 "data_offset": 2048, 00:16:32.703 "data_size": 63488 00:16:32.703 } 00:16:32.703 ] 00:16:32.703 }' 00:16:32.703 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.703 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.963 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:16:32.963 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.963 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.963 [2024-11-20 08:50:03.850950] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.963 [2024-11-20 08:50:03.850993] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.963 [2024-11-20 08:50:03.851095] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.963 [2024-11-20 08:50:03.851228] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.963 [2024-11-20 08:50:03.851248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:32.963 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.963 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.963 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.963 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.963 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:32.963 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.222 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:33.222 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:33.222 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:33.222 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:33.222 
08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:33.222 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:33.222 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:33.222 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:33.222 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:33.222 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:33.222 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:33.222 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:33.222 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:33.481 /dev/nbd0 00:16:33.481 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:33.481 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:33.481 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:33.481 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:33.481 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:33.481 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:33.481 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:33.481 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:33.481 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:33.481 08:50:04 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:33.481 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:33.481 1+0 records in 00:16:33.481 1+0 records out 00:16:33.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351683 s, 11.6 MB/s 00:16:33.481 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:33.481 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:33.481 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:33.481 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:33.481 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:33.481 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:33.481 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:33.481 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:33.740 /dev/nbd1 00:16:33.740 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:33.740 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:33.740 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:33.740 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:33.740 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:33.740 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- 
# (( i <= 20 )) 00:16:33.740 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:33.740 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:33.740 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:33.740 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:33.740 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:33.740 1+0 records in 00:16:33.740 1+0 records out 00:16:33.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426798 s, 9.6 MB/s 00:16:33.740 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:33.740 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:33.740 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:33.740 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:33.740 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:33.740 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:33.740 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:33.740 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:34.021 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:34.021 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:34.021 08:50:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:34.021 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:34.021 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:34.021 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:34.021 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:34.289 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:34.289 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:34.289 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:34.289 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:34.289 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:34.289 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:34.289 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:34.289 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:34.289 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:34.289 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:34.547 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 
-- # (( i = 1 )) 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.548 [2024-11-20 08:50:05.273882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:34.548 [2024-11-20 08:50:05.273961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.548 [2024-11-20 08:50:05.273994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:34.548 [2024-11-20 08:50:05.274009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.548 [2024-11-20 08:50:05.276922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.548 [2024-11-20 08:50:05.276965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:16:34.548 [2024-11-20 08:50:05.277095] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:34.548 [2024-11-20 08:50:05.277187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:34.548 [2024-11-20 08:50:05.277369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:34.548 [2024-11-20 08:50:05.277499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:34.548 spare 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.548 [2024-11-20 08:50:05.377615] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:34.548 [2024-11-20 08:50:05.377645] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:34.548 [2024-11-20 08:50:05.377999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:16:34.548 [2024-11-20 08:50:05.378254] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:34.548 [2024-11-20 08:50:05.378330] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:34.548 [2024-11-20 08:50:05.378557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:34.548 08:50:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.548 "name": "raid_bdev1", 00:16:34.548 "uuid": "114be33b-ec94-47c1-a83c-faebc0cf0ced", 00:16:34.548 "strip_size_kb": 0, 00:16:34.548 "state": "online", 00:16:34.548 "raid_level": "raid1", 00:16:34.548 "superblock": true, 00:16:34.548 "num_base_bdevs": 4, 00:16:34.548 "num_base_bdevs_discovered": 3, 00:16:34.548 "num_base_bdevs_operational": 3, 00:16:34.548 "base_bdevs_list": [ 00:16:34.548 { 
00:16:34.548 "name": "spare", 00:16:34.548 "uuid": "ab293204-6535-5435-b2ee-ea3e8d841ee4", 00:16:34.548 "is_configured": true, 00:16:34.548 "data_offset": 2048, 00:16:34.548 "data_size": 63488 00:16:34.548 }, 00:16:34.548 { 00:16:34.548 "name": null, 00:16:34.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.548 "is_configured": false, 00:16:34.548 "data_offset": 2048, 00:16:34.548 "data_size": 63488 00:16:34.548 }, 00:16:34.548 { 00:16:34.548 "name": "BaseBdev3", 00:16:34.548 "uuid": "b74396ac-5eea-5bd9-b075-45b433a3aff0", 00:16:34.548 "is_configured": true, 00:16:34.548 "data_offset": 2048, 00:16:34.548 "data_size": 63488 00:16:34.548 }, 00:16:34.548 { 00:16:34.548 "name": "BaseBdev4", 00:16:34.548 "uuid": "d54b5923-6305-5271-b7d1-48b442e31a1d", 00:16:34.548 "is_configured": true, 00:16:34.548 "data_offset": 2048, 00:16:34.548 "data_size": 63488 00:16:34.548 } 00:16:34.548 ] 00:16:34.548 }' 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.548 08:50:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.116 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:35.116 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.116 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:35.116 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:35.116 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.116 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.116 08:50:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.116 08:50:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.116 
08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.116 08:50:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.116 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.116 "name": "raid_bdev1", 00:16:35.116 "uuid": "114be33b-ec94-47c1-a83c-faebc0cf0ced", 00:16:35.116 "strip_size_kb": 0, 00:16:35.116 "state": "online", 00:16:35.116 "raid_level": "raid1", 00:16:35.116 "superblock": true, 00:16:35.116 "num_base_bdevs": 4, 00:16:35.116 "num_base_bdevs_discovered": 3, 00:16:35.116 "num_base_bdevs_operational": 3, 00:16:35.116 "base_bdevs_list": [ 00:16:35.116 { 00:16:35.116 "name": "spare", 00:16:35.116 "uuid": "ab293204-6535-5435-b2ee-ea3e8d841ee4", 00:16:35.116 "is_configured": true, 00:16:35.116 "data_offset": 2048, 00:16:35.116 "data_size": 63488 00:16:35.116 }, 00:16:35.116 { 00:16:35.116 "name": null, 00:16:35.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.116 "is_configured": false, 00:16:35.116 "data_offset": 2048, 00:16:35.116 "data_size": 63488 00:16:35.116 }, 00:16:35.116 { 00:16:35.116 "name": "BaseBdev3", 00:16:35.116 "uuid": "b74396ac-5eea-5bd9-b075-45b433a3aff0", 00:16:35.116 "is_configured": true, 00:16:35.116 "data_offset": 2048, 00:16:35.116 "data_size": 63488 00:16:35.116 }, 00:16:35.116 { 00:16:35.116 "name": "BaseBdev4", 00:16:35.116 "uuid": "d54b5923-6305-5271-b7d1-48b442e31a1d", 00:16:35.116 "is_configured": true, 00:16:35.116 "data_offset": 2048, 00:16:35.116 "data_size": 63488 00:16:35.116 } 00:16:35.116 ] 00:16:35.116 }' 00:16:35.116 08:50:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.116 08:50:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:35.116 08:50:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.376 08:50:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.376 [2024-11-20 08:50:06.106716] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:35.376 08:50:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.376 08:50:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.376 "name": "raid_bdev1", 00:16:35.376 "uuid": "114be33b-ec94-47c1-a83c-faebc0cf0ced", 00:16:35.376 "strip_size_kb": 0, 00:16:35.376 "state": "online", 00:16:35.376 "raid_level": "raid1", 00:16:35.376 "superblock": true, 00:16:35.376 "num_base_bdevs": 4, 00:16:35.376 "num_base_bdevs_discovered": 2, 00:16:35.376 "num_base_bdevs_operational": 2, 00:16:35.376 "base_bdevs_list": [ 00:16:35.376 { 00:16:35.376 "name": null, 00:16:35.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.376 "is_configured": false, 00:16:35.376 "data_offset": 0, 00:16:35.376 "data_size": 63488 00:16:35.376 }, 00:16:35.376 { 00:16:35.376 "name": null, 00:16:35.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.376 "is_configured": false, 00:16:35.376 "data_offset": 2048, 00:16:35.376 "data_size": 63488 00:16:35.376 }, 00:16:35.376 { 00:16:35.376 "name": "BaseBdev3", 00:16:35.376 "uuid": "b74396ac-5eea-5bd9-b075-45b433a3aff0", 00:16:35.376 
"is_configured": true, 00:16:35.376 "data_offset": 2048, 00:16:35.376 "data_size": 63488 00:16:35.376 }, 00:16:35.376 { 00:16:35.376 "name": "BaseBdev4", 00:16:35.377 "uuid": "d54b5923-6305-5271-b7d1-48b442e31a1d", 00:16:35.377 "is_configured": true, 00:16:35.377 "data_offset": 2048, 00:16:35.377 "data_size": 63488 00:16:35.377 } 00:16:35.377 ] 00:16:35.377 }' 00:16:35.377 08:50:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.377 08:50:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.944 08:50:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:35.944 08:50:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.944 08:50:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.944 [2024-11-20 08:50:06.646891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:35.944 [2024-11-20 08:50:06.647117] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:35.944 [2024-11-20 08:50:06.647141] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:35.944 [2024-11-20 08:50:06.647217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:35.944 [2024-11-20 08:50:06.660444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:16:35.944 08:50:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.944 08:50:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:35.944 [2024-11-20 08:50:06.662850] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:36.880 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.880 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.880 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.880 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.880 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.880 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.880 08:50:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.880 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.880 08:50:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.880 08:50:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.880 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.880 "name": "raid_bdev1", 00:16:36.880 "uuid": "114be33b-ec94-47c1-a83c-faebc0cf0ced", 00:16:36.880 "strip_size_kb": 0, 00:16:36.880 "state": "online", 00:16:36.880 "raid_level": "raid1", 
00:16:36.880 "superblock": true, 00:16:36.880 "num_base_bdevs": 4, 00:16:36.880 "num_base_bdevs_discovered": 3, 00:16:36.880 "num_base_bdevs_operational": 3, 00:16:36.880 "process": { 00:16:36.880 "type": "rebuild", 00:16:36.880 "target": "spare", 00:16:36.880 "progress": { 00:16:36.880 "blocks": 20480, 00:16:36.880 "percent": 32 00:16:36.880 } 00:16:36.880 }, 00:16:36.880 "base_bdevs_list": [ 00:16:36.880 { 00:16:36.880 "name": "spare", 00:16:36.880 "uuid": "ab293204-6535-5435-b2ee-ea3e8d841ee4", 00:16:36.880 "is_configured": true, 00:16:36.880 "data_offset": 2048, 00:16:36.880 "data_size": 63488 00:16:36.880 }, 00:16:36.880 { 00:16:36.880 "name": null, 00:16:36.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.880 "is_configured": false, 00:16:36.880 "data_offset": 2048, 00:16:36.880 "data_size": 63488 00:16:36.880 }, 00:16:36.880 { 00:16:36.880 "name": "BaseBdev3", 00:16:36.880 "uuid": "b74396ac-5eea-5bd9-b075-45b433a3aff0", 00:16:36.880 "is_configured": true, 00:16:36.880 "data_offset": 2048, 00:16:36.880 "data_size": 63488 00:16:36.880 }, 00:16:36.880 { 00:16:36.880 "name": "BaseBdev4", 00:16:36.880 "uuid": "d54b5923-6305-5271-b7d1-48b442e31a1d", 00:16:36.880 "is_configured": true, 00:16:36.880 "data_offset": 2048, 00:16:36.880 "data_size": 63488 00:16:36.880 } 00:16:36.880 ] 00:16:36.880 }' 00:16:36.880 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.880 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:36.880 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.139 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.139 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:37.139 08:50:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:37.139 08:50:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.139 [2024-11-20 08:50:07.828029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:37.139 [2024-11-20 08:50:07.870900] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:37.139 [2024-11-20 08:50:07.871112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.139 [2024-11-20 08:50:07.871168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:37.139 [2024-11-20 08:50:07.871185] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:37.139 08:50:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.139 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:37.139 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.139 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.139 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.139 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.139 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:37.139 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.139 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.139 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.139 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.139 08:50:07 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.139 08:50:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.139 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.139 08:50:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.139 08:50:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.139 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.139 "name": "raid_bdev1", 00:16:37.139 "uuid": "114be33b-ec94-47c1-a83c-faebc0cf0ced", 00:16:37.139 "strip_size_kb": 0, 00:16:37.139 "state": "online", 00:16:37.139 "raid_level": "raid1", 00:16:37.139 "superblock": true, 00:16:37.139 "num_base_bdevs": 4, 00:16:37.139 "num_base_bdevs_discovered": 2, 00:16:37.139 "num_base_bdevs_operational": 2, 00:16:37.139 "base_bdevs_list": [ 00:16:37.139 { 00:16:37.139 "name": null, 00:16:37.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.139 "is_configured": false, 00:16:37.139 "data_offset": 0, 00:16:37.139 "data_size": 63488 00:16:37.139 }, 00:16:37.139 { 00:16:37.139 "name": null, 00:16:37.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.139 "is_configured": false, 00:16:37.139 "data_offset": 2048, 00:16:37.139 "data_size": 63488 00:16:37.139 }, 00:16:37.139 { 00:16:37.139 "name": "BaseBdev3", 00:16:37.139 "uuid": "b74396ac-5eea-5bd9-b075-45b433a3aff0", 00:16:37.139 "is_configured": true, 00:16:37.139 "data_offset": 2048, 00:16:37.139 "data_size": 63488 00:16:37.139 }, 00:16:37.139 { 00:16:37.139 "name": "BaseBdev4", 00:16:37.139 "uuid": "d54b5923-6305-5271-b7d1-48b442e31a1d", 00:16:37.139 "is_configured": true, 00:16:37.139 "data_offset": 2048, 00:16:37.139 "data_size": 63488 00:16:37.139 } 00:16:37.139 ] 00:16:37.139 }' 00:16:37.139 08:50:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:16:37.139 08:50:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.707 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:37.707 08:50:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.707 08:50:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.707 [2024-11-20 08:50:08.414644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:37.707 [2024-11-20 08:50:08.414717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.707 [2024-11-20 08:50:08.414757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:37.707 [2024-11-20 08:50:08.414773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.707 [2024-11-20 08:50:08.415382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.707 [2024-11-20 08:50:08.415426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:37.707 [2024-11-20 08:50:08.415550] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:37.707 [2024-11-20 08:50:08.415571] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:37.707 [2024-11-20 08:50:08.415590] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
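The `num_base_bdevs_discovered` values the test asserts (3 while the spare is attached mid-rebuild, 2 after it is deleted) correspond to the `"is_configured": true` entries in `base_bdevs_list`. A hedged sketch of that relationship, using a trimmed stand-in for the JSON above:

```shell
# Hypothetical derivation: count configured entries in base_bdevs_list; this
# matches the num_base_bdevs_discovered field reported alongside it.
info='{"base_bdevs_list":[
  {"name":"spare","is_configured":true},
  {"name":null,"is_configured":false},
  {"name":"BaseBdev3","is_configured":true},
  {"name":"BaseBdev4","is_configured":true}]}'
discovered=$(jq '[.base_bdevs_list[] | select(.is_configured)] | length' <<< "$info")
echo "num_base_bdevs_discovered=$discovered"
```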
00:16:37.707 [2024-11-20 08:50:08.415628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:37.707 [2024-11-20 08:50:08.428670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:16:37.707 spare 00:16:37.707 08:50:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.707 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:37.707 [2024-11-20 08:50:08.431081] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:38.643 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.643 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.643 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.643 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.643 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.643 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.643 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.643 08:50:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.643 08:50:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.643 08:50:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.643 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.643 "name": "raid_bdev1", 00:16:38.643 "uuid": "114be33b-ec94-47c1-a83c-faebc0cf0ced", 00:16:38.643 "strip_size_kb": 0, 00:16:38.643 "state": "online", 00:16:38.643 
"raid_level": "raid1", 00:16:38.643 "superblock": true, 00:16:38.643 "num_base_bdevs": 4, 00:16:38.643 "num_base_bdevs_discovered": 3, 00:16:38.643 "num_base_bdevs_operational": 3, 00:16:38.643 "process": { 00:16:38.643 "type": "rebuild", 00:16:38.643 "target": "spare", 00:16:38.643 "progress": { 00:16:38.643 "blocks": 20480, 00:16:38.643 "percent": 32 00:16:38.643 } 00:16:38.643 }, 00:16:38.643 "base_bdevs_list": [ 00:16:38.643 { 00:16:38.643 "name": "spare", 00:16:38.643 "uuid": "ab293204-6535-5435-b2ee-ea3e8d841ee4", 00:16:38.643 "is_configured": true, 00:16:38.643 "data_offset": 2048, 00:16:38.643 "data_size": 63488 00:16:38.643 }, 00:16:38.643 { 00:16:38.643 "name": null, 00:16:38.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.643 "is_configured": false, 00:16:38.643 "data_offset": 2048, 00:16:38.643 "data_size": 63488 00:16:38.643 }, 00:16:38.643 { 00:16:38.643 "name": "BaseBdev3", 00:16:38.643 "uuid": "b74396ac-5eea-5bd9-b075-45b433a3aff0", 00:16:38.643 "is_configured": true, 00:16:38.643 "data_offset": 2048, 00:16:38.643 "data_size": 63488 00:16:38.643 }, 00:16:38.643 { 00:16:38.643 "name": "BaseBdev4", 00:16:38.643 "uuid": "d54b5923-6305-5271-b7d1-48b442e31a1d", 00:16:38.643 "is_configured": true, 00:16:38.643 "data_offset": 2048, 00:16:38.643 "data_size": 63488 00:16:38.643 } 00:16:38.643 ] 00:16:38.643 }' 00:16:38.643 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.643 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.643 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.902 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.902 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:38.902 08:50:09 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.902 08:50:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.902 [2024-11-20 08:50:09.604793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.902 [2024-11-20 08:50:09.639997] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:38.902 [2024-11-20 08:50:09.640309] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.902 [2024-11-20 08:50:09.640585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.902 [2024-11-20 08:50:09.640651] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:38.902 08:50:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.902 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:38.902 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.902 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.902 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.902 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.902 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:38.902 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.902 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.903 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.903 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.903 
08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.903 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.903 08:50:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.903 08:50:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.903 08:50:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.903 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.903 "name": "raid_bdev1", 00:16:38.903 "uuid": "114be33b-ec94-47c1-a83c-faebc0cf0ced", 00:16:38.903 "strip_size_kb": 0, 00:16:38.903 "state": "online", 00:16:38.903 "raid_level": "raid1", 00:16:38.903 "superblock": true, 00:16:38.903 "num_base_bdevs": 4, 00:16:38.903 "num_base_bdevs_discovered": 2, 00:16:38.903 "num_base_bdevs_operational": 2, 00:16:38.903 "base_bdevs_list": [ 00:16:38.903 { 00:16:38.903 "name": null, 00:16:38.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.903 "is_configured": false, 00:16:38.903 "data_offset": 0, 00:16:38.903 "data_size": 63488 00:16:38.903 }, 00:16:38.903 { 00:16:38.903 "name": null, 00:16:38.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.903 "is_configured": false, 00:16:38.903 "data_offset": 2048, 00:16:38.903 "data_size": 63488 00:16:38.903 }, 00:16:38.903 { 00:16:38.903 "name": "BaseBdev3", 00:16:38.903 "uuid": "b74396ac-5eea-5bd9-b075-45b433a3aff0", 00:16:38.903 "is_configured": true, 00:16:38.903 "data_offset": 2048, 00:16:38.903 "data_size": 63488 00:16:38.903 }, 00:16:38.903 { 00:16:38.903 "name": "BaseBdev4", 00:16:38.903 "uuid": "d54b5923-6305-5271-b7d1-48b442e31a1d", 00:16:38.903 "is_configured": true, 00:16:38.903 "data_offset": 2048, 00:16:38.903 "data_size": 63488 00:16:38.903 } 00:16:38.903 ] 00:16:38.903 }' 00:16:38.903 08:50:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.903 08:50:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.471 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:39.471 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.471 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.471 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.471 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.471 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.471 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.471 08:50:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.471 08:50:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.471 08:50:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.471 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.471 "name": "raid_bdev1", 00:16:39.471 "uuid": "114be33b-ec94-47c1-a83c-faebc0cf0ced", 00:16:39.471 "strip_size_kb": 0, 00:16:39.471 "state": "online", 00:16:39.471 "raid_level": "raid1", 00:16:39.471 "superblock": true, 00:16:39.471 "num_base_bdevs": 4, 00:16:39.471 "num_base_bdevs_discovered": 2, 00:16:39.471 "num_base_bdevs_operational": 2, 00:16:39.471 "base_bdevs_list": [ 00:16:39.471 { 00:16:39.471 "name": null, 00:16:39.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.471 "is_configured": false, 00:16:39.471 "data_offset": 0, 00:16:39.471 "data_size": 63488 00:16:39.471 }, 00:16:39.471 
{ 00:16:39.471 "name": null, 00:16:39.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.471 "is_configured": false, 00:16:39.471 "data_offset": 2048, 00:16:39.471 "data_size": 63488 00:16:39.471 }, 00:16:39.471 { 00:16:39.471 "name": "BaseBdev3", 00:16:39.471 "uuid": "b74396ac-5eea-5bd9-b075-45b433a3aff0", 00:16:39.471 "is_configured": true, 00:16:39.471 "data_offset": 2048, 00:16:39.471 "data_size": 63488 00:16:39.471 }, 00:16:39.471 { 00:16:39.471 "name": "BaseBdev4", 00:16:39.471 "uuid": "d54b5923-6305-5271-b7d1-48b442e31a1d", 00:16:39.471 "is_configured": true, 00:16:39.471 "data_offset": 2048, 00:16:39.471 "data_size": 63488 00:16:39.471 } 00:16:39.471 ] 00:16:39.471 }' 00:16:39.471 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.471 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.471 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.471 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.471 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:39.471 08:50:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.471 08:50:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.471 08:50:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.471 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:39.471 08:50:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.471 08:50:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.471 [2024-11-20 08:50:10.348097] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:39.471 [2024-11-20 08:50:10.348216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.471 [2024-11-20 08:50:10.348246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:39.471 [2024-11-20 08:50:10.348264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.471 [2024-11-20 08:50:10.348824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.471 [2024-11-20 08:50:10.348861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:39.471 [2024-11-20 08:50:10.348959] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:39.471 [2024-11-20 08:50:10.348984] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:39.471 [2024-11-20 08:50:10.348996] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:39.471 [2024-11-20 08:50:10.349025] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:39.471 BaseBdev1 00:16:39.471 08:50:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.471 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:40.847 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:40.847 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.847 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.847 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.847 08:50:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.847 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:40.847 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.847 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.847 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.847 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.847 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.847 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.847 08:50:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.847 08:50:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.847 08:50:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.847 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.847 "name": "raid_bdev1", 00:16:40.847 "uuid": "114be33b-ec94-47c1-a83c-faebc0cf0ced", 00:16:40.847 "strip_size_kb": 0, 00:16:40.847 "state": "online", 00:16:40.847 "raid_level": "raid1", 00:16:40.847 "superblock": true, 00:16:40.847 "num_base_bdevs": 4, 00:16:40.847 "num_base_bdevs_discovered": 2, 00:16:40.847 "num_base_bdevs_operational": 2, 00:16:40.847 "base_bdevs_list": [ 00:16:40.847 { 00:16:40.847 "name": null, 00:16:40.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.847 "is_configured": false, 00:16:40.847 "data_offset": 0, 00:16:40.847 "data_size": 63488 00:16:40.847 }, 00:16:40.847 { 00:16:40.847 "name": null, 00:16:40.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.847 
"is_configured": false, 00:16:40.847 "data_offset": 2048, 00:16:40.847 "data_size": 63488 00:16:40.847 }, 00:16:40.847 { 00:16:40.847 "name": "BaseBdev3", 00:16:40.847 "uuid": "b74396ac-5eea-5bd9-b075-45b433a3aff0", 00:16:40.847 "is_configured": true, 00:16:40.847 "data_offset": 2048, 00:16:40.847 "data_size": 63488 00:16:40.847 }, 00:16:40.847 { 00:16:40.847 "name": "BaseBdev4", 00:16:40.847 "uuid": "d54b5923-6305-5271-b7d1-48b442e31a1d", 00:16:40.847 "is_configured": true, 00:16:40.847 "data_offset": 2048, 00:16:40.847 "data_size": 63488 00:16:40.847 } 00:16:40.847 ] 00:16:40.847 }' 00:16:40.847 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.847 08:50:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.107 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:41.107 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.107 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:41.107 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:41.107 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.107 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.107 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.107 08:50:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.107 08:50:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.107 08:50:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.107 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:41.107 "name": "raid_bdev1", 00:16:41.107 "uuid": "114be33b-ec94-47c1-a83c-faebc0cf0ced", 00:16:41.107 "strip_size_kb": 0, 00:16:41.107 "state": "online", 00:16:41.107 "raid_level": "raid1", 00:16:41.107 "superblock": true, 00:16:41.107 "num_base_bdevs": 4, 00:16:41.107 "num_base_bdevs_discovered": 2, 00:16:41.107 "num_base_bdevs_operational": 2, 00:16:41.107 "base_bdevs_list": [ 00:16:41.107 { 00:16:41.107 "name": null, 00:16:41.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.107 "is_configured": false, 00:16:41.107 "data_offset": 0, 00:16:41.107 "data_size": 63488 00:16:41.107 }, 00:16:41.107 { 00:16:41.107 "name": null, 00:16:41.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.107 "is_configured": false, 00:16:41.107 "data_offset": 2048, 00:16:41.107 "data_size": 63488 00:16:41.107 }, 00:16:41.107 { 00:16:41.107 "name": "BaseBdev3", 00:16:41.107 "uuid": "b74396ac-5eea-5bd9-b075-45b433a3aff0", 00:16:41.107 "is_configured": true, 00:16:41.107 "data_offset": 2048, 00:16:41.107 "data_size": 63488 00:16:41.107 }, 00:16:41.107 { 00:16:41.107 "name": "BaseBdev4", 00:16:41.107 "uuid": "d54b5923-6305-5271-b7d1-48b442e31a1d", 00:16:41.107 "is_configured": true, 00:16:41.107 "data_offset": 2048, 00:16:41.107 "data_size": 63488 00:16:41.107 } 00:16:41.107 ] 00:16:41.107 }' 00:16:41.107 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.107 08:50:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:41.107 08:50:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.366 08:50:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:41.366 08:50:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:41.366 08:50:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:16:41.366 08:50:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:41.366 08:50:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:41.366 08:50:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:41.366 08:50:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:41.366 08:50:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:41.366 08:50:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:41.366 08:50:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.366 08:50:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.366 [2024-11-20 08:50:12.068679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:41.366 [2024-11-20 08:50:12.068921] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:41.366 [2024-11-20 08:50:12.068941] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:41.366 request: 00:16:41.366 { 00:16:41.366 "base_bdev": "BaseBdev1", 00:16:41.366 "raid_bdev": "raid_bdev1", 00:16:41.366 "method": "bdev_raid_add_base_bdev", 00:16:41.366 "req_id": 1 00:16:41.366 } 00:16:41.366 Got JSON-RPC error response 00:16:41.366 response: 00:16:41.366 { 00:16:41.366 "code": -22, 00:16:41.366 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:41.366 } 00:16:41.366 08:50:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:41.366 08:50:12 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:16:41.366 08:50:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:41.366 08:50:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:41.366 08:50:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:41.366 08:50:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:42.302 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:42.302 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.302 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.302 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.302 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.302 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:42.302 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.302 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.302 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.302 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.302 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.302 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.302 08:50:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.302 08:50:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:42.302 08:50:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.302 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.302 "name": "raid_bdev1", 00:16:42.302 "uuid": "114be33b-ec94-47c1-a83c-faebc0cf0ced", 00:16:42.302 "strip_size_kb": 0, 00:16:42.302 "state": "online", 00:16:42.302 "raid_level": "raid1", 00:16:42.302 "superblock": true, 00:16:42.302 "num_base_bdevs": 4, 00:16:42.302 "num_base_bdevs_discovered": 2, 00:16:42.302 "num_base_bdevs_operational": 2, 00:16:42.302 "base_bdevs_list": [ 00:16:42.302 { 00:16:42.302 "name": null, 00:16:42.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.302 "is_configured": false, 00:16:42.302 "data_offset": 0, 00:16:42.302 "data_size": 63488 00:16:42.302 }, 00:16:42.302 { 00:16:42.302 "name": null, 00:16:42.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.302 "is_configured": false, 00:16:42.302 "data_offset": 2048, 00:16:42.302 "data_size": 63488 00:16:42.302 }, 00:16:42.302 { 00:16:42.302 "name": "BaseBdev3", 00:16:42.302 "uuid": "b74396ac-5eea-5bd9-b075-45b433a3aff0", 00:16:42.302 "is_configured": true, 00:16:42.302 "data_offset": 2048, 00:16:42.302 "data_size": 63488 00:16:42.302 }, 00:16:42.302 { 00:16:42.302 "name": "BaseBdev4", 00:16:42.302 "uuid": "d54b5923-6305-5271-b7d1-48b442e31a1d", 00:16:42.302 "is_configured": true, 00:16:42.302 "data_offset": 2048, 00:16:42.302 "data_size": 63488 00:16:42.302 } 00:16:42.302 ] 00:16:42.302 }' 00:16:42.302 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.302 08:50:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.869 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:42.869 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.869 08:50:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:42.869 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:42.869 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.869 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.869 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.869 08:50:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.869 08:50:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.869 08:50:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.869 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.869 "name": "raid_bdev1", 00:16:42.869 "uuid": "114be33b-ec94-47c1-a83c-faebc0cf0ced", 00:16:42.869 "strip_size_kb": 0, 00:16:42.869 "state": "online", 00:16:42.869 "raid_level": "raid1", 00:16:42.869 "superblock": true, 00:16:42.869 "num_base_bdevs": 4, 00:16:42.869 "num_base_bdevs_discovered": 2, 00:16:42.869 "num_base_bdevs_operational": 2, 00:16:42.869 "base_bdevs_list": [ 00:16:42.869 { 00:16:42.869 "name": null, 00:16:42.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.869 "is_configured": false, 00:16:42.869 "data_offset": 0, 00:16:42.869 "data_size": 63488 00:16:42.869 }, 00:16:42.869 { 00:16:42.869 "name": null, 00:16:42.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.869 "is_configured": false, 00:16:42.869 "data_offset": 2048, 00:16:42.869 "data_size": 63488 00:16:42.869 }, 00:16:42.869 { 00:16:42.869 "name": "BaseBdev3", 00:16:42.869 "uuid": "b74396ac-5eea-5bd9-b075-45b433a3aff0", 00:16:42.869 "is_configured": true, 00:16:42.869 "data_offset": 2048, 00:16:42.869 "data_size": 63488 00:16:42.869 }, 
00:16:42.869 { 00:16:42.870 "name": "BaseBdev4", 00:16:42.870 "uuid": "d54b5923-6305-5271-b7d1-48b442e31a1d", 00:16:42.870 "is_configured": true, 00:16:42.870 "data_offset": 2048, 00:16:42.870 "data_size": 63488 00:16:42.870 } 00:16:42.870 ] 00:16:42.870 }' 00:16:42.870 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.870 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:42.870 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.870 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:42.870 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78275 00:16:42.870 08:50:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78275 ']' 00:16:42.870 08:50:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78275 00:16:42.870 08:50:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:42.870 08:50:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:42.870 08:50:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78275 00:16:43.128 08:50:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:43.128 08:50:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:43.128 08:50:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78275' 00:16:43.128 killing process with pid 78275 00:16:43.128 08:50:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78275 00:16:43.128 Received shutdown signal, test time was about 60.000000 seconds 00:16:43.128 00:16:43.128 Latency(us) 00:16:43.128 
[2024-11-20T08:50:14.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.128 [2024-11-20T08:50:14.044Z] =================================================================================================================== 00:16:43.128 [2024-11-20T08:50:14.044Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:43.128 [2024-11-20 08:50:13.799809] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:43.128 08:50:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78275 00:16:43.128 [2024-11-20 08:50:13.800095] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.128 [2024-11-20 08:50:13.800224] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.128 [2024-11-20 08:50:13.800254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:43.387 [2024-11-20 08:50:14.216040] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:44.342 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:44.343 ************************************ 00:16:44.343 END TEST raid_rebuild_test_sb 00:16:44.343 ************************************ 00:16:44.343 00:16:44.343 real 0m29.009s 00:16:44.343 user 0m35.573s 00:16:44.343 sys 0m3.974s 00:16:44.343 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.343 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.343 08:50:15 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:16:44.343 08:50:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:44.343 08:50:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.343 08:50:15 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:16:44.601 ************************************ 00:16:44.601 START TEST raid_rebuild_test_io 00:16:44.601 ************************************ 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:44.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79068 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79068 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79068 ']' 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:44.601 08:50:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.601 [2024-11-20 08:50:15.390321] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:16:44.601 [2024-11-20 08:50:15.390807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79068 ] 00:16:44.601 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:44.601 Zero copy mechanism will not be used. 
00:16:44.859 [2024-11-20 08:50:15.573671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.859 [2024-11-20 08:50:15.701748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.117 [2024-11-20 08:50:15.896741] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.117 [2024-11-20 08:50:15.897033] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.685 BaseBdev1_malloc 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.685 [2024-11-20 08:50:16.439112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:45.685 [2024-11-20 08:50:16.439259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.685 [2024-11-20 08:50:16.439294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:45.685 [2024-11-20 
08:50:16.439312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.685 [2024-11-20 08:50:16.442083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.685 [2024-11-20 08:50:16.442315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:45.685 BaseBdev1 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.685 BaseBdev2_malloc 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.685 [2024-11-20 08:50:16.485269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:45.685 [2024-11-20 08:50:16.485347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.685 [2024-11-20 08:50:16.485375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:45.685 [2024-11-20 08:50:16.485394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.685 [2024-11-20 08:50:16.488353] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:16:45.685 [2024-11-20 08:50:16.488536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:45.685 BaseBdev2 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.685 BaseBdev3_malloc 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.685 [2024-11-20 08:50:16.556832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:45.685 [2024-11-20 08:50:16.556904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.685 [2024-11-20 08:50:16.556936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:45.685 [2024-11-20 08:50:16.556954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.685 [2024-11-20 08:50:16.559777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.685 [2024-11-20 08:50:16.559829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:45.685 BaseBdev3 00:16:45.685 08:50:16 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.685 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.944 BaseBdev4_malloc 00:16:45.944 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.944 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:45.944 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.944 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.944 [2024-11-20 08:50:16.612578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:45.944 [2024-11-20 08:50:16.612646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.944 [2024-11-20 08:50:16.612673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:45.944 [2024-11-20 08:50:16.612689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.944 [2024-11-20 08:50:16.615457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.944 [2024-11-20 08:50:16.615507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:45.944 BaseBdev4 00:16:45.944 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.944 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:16:45.944 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.944 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.944 spare_malloc 00:16:45.944 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.944 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:45.944 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.944 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.944 spare_delay 00:16:45.944 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.944 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:45.944 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.944 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.944 [2024-11-20 08:50:16.672899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:45.944 [2024-11-20 08:50:16.672985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.944 [2024-11-20 08:50:16.673013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:45.944 [2024-11-20 08:50:16.673029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.944 [2024-11-20 08:50:16.675871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.944 [2024-11-20 08:50:16.676065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:45.944 spare 00:16:45.944 08:50:16 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.944 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:45.944 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.944 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.944 [2024-11-20 08:50:16.680955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.944 [2024-11-20 08:50:16.683519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:45.944 [2024-11-20 08:50:16.683773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:45.944 [2024-11-20 08:50:16.683904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:45.944 [2024-11-20 08:50:16.684058] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:45.944 [2024-11-20 08:50:16.684237] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:45.944 [2024-11-20 08:50:16.684601] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:45.944 [2024-11-20 08:50:16.685007] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:45.945 [2024-11-20 08:50:16.685134] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:45.945 [2024-11-20 08:50:16.685546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.945 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.945 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:45.945 08:50:16 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.945 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.945 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.945 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.945 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.945 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.945 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.945 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.945 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.945 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.945 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.945 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.945 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.945 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.945 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.945 "name": "raid_bdev1", 00:16:45.945 "uuid": "374112a3-4b79-4276-acf4-8e78475d4f1a", 00:16:45.945 "strip_size_kb": 0, 00:16:45.945 "state": "online", 00:16:45.945 "raid_level": "raid1", 00:16:45.945 "superblock": false, 00:16:45.945 "num_base_bdevs": 4, 00:16:45.945 "num_base_bdevs_discovered": 4, 00:16:45.945 "num_base_bdevs_operational": 4, 00:16:45.945 "base_bdevs_list": [ 00:16:45.945 
{ 00:16:45.945 "name": "BaseBdev1", 00:16:45.945 "uuid": "d53d90a3-930d-5cbd-95b7-ba150e056a30", 00:16:45.945 "is_configured": true, 00:16:45.945 "data_offset": 0, 00:16:45.945 "data_size": 65536 00:16:45.945 }, 00:16:45.945 { 00:16:45.945 "name": "BaseBdev2", 00:16:45.945 "uuid": "28ae1054-dd0c-57ac-99f5-3807e9718efc", 00:16:45.945 "is_configured": true, 00:16:45.945 "data_offset": 0, 00:16:45.945 "data_size": 65536 00:16:45.945 }, 00:16:45.945 { 00:16:45.945 "name": "BaseBdev3", 00:16:45.945 "uuid": "575b9a92-4a1b-5c2b-a696-59786691206e", 00:16:45.945 "is_configured": true, 00:16:45.945 "data_offset": 0, 00:16:45.945 "data_size": 65536 00:16:45.945 }, 00:16:45.945 { 00:16:45.945 "name": "BaseBdev4", 00:16:45.945 "uuid": "af7bab7c-b49a-5d69-a437-1a7b9335be42", 00:16:45.945 "is_configured": true, 00:16:45.945 "data_offset": 0, 00:16:45.945 "data_size": 65536 00:16:45.945 } 00:16:45.945 ] 00:16:45.945 }' 00:16:45.945 08:50:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.945 08:50:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.510 08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.511 [2024-11-20 08:50:17.198051] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.511 
08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.511 [2024-11-20 08:50:17.297617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.511 "name": "raid_bdev1", 00:16:46.511 "uuid": "374112a3-4b79-4276-acf4-8e78475d4f1a", 00:16:46.511 "strip_size_kb": 0, 00:16:46.511 "state": "online", 00:16:46.511 "raid_level": "raid1", 00:16:46.511 "superblock": false, 00:16:46.511 "num_base_bdevs": 4, 00:16:46.511 "num_base_bdevs_discovered": 3, 00:16:46.511 "num_base_bdevs_operational": 3, 00:16:46.511 "base_bdevs_list": [ 00:16:46.511 { 00:16:46.511 "name": null, 00:16:46.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.511 "is_configured": false, 00:16:46.511 "data_offset": 0, 00:16:46.511 "data_size": 65536 00:16:46.511 }, 00:16:46.511 { 00:16:46.511 "name": "BaseBdev2", 00:16:46.511 "uuid": "28ae1054-dd0c-57ac-99f5-3807e9718efc", 00:16:46.511 "is_configured": true, 00:16:46.511 "data_offset": 0, 00:16:46.511 "data_size": 65536 00:16:46.511 }, 00:16:46.511 { 00:16:46.511 "name": "BaseBdev3", 00:16:46.511 "uuid": 
"575b9a92-4a1b-5c2b-a696-59786691206e", 00:16:46.511 "is_configured": true, 00:16:46.511 "data_offset": 0, 00:16:46.511 "data_size": 65536 00:16:46.511 }, 00:16:46.511 { 00:16:46.511 "name": "BaseBdev4", 00:16:46.511 "uuid": "af7bab7c-b49a-5d69-a437-1a7b9335be42", 00:16:46.511 "is_configured": true, 00:16:46.511 "data_offset": 0, 00:16:46.511 "data_size": 65536 00:16:46.511 } 00:16:46.511 ] 00:16:46.511 }' 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.511 08:50:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.769 [2024-11-20 08:50:17.425746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:46.769 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:46.769 Zero copy mechanism will not be used. 00:16:46.769 Running I/O for 60 seconds... 00:16:47.027 08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:47.027 08:50:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.027 08:50:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:47.027 [2024-11-20 08:50:17.841223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:47.027 08:50:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.027 08:50:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:47.027 [2024-11-20 08:50:17.890821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:47.027 [2024-11-20 08:50:17.893541] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:47.286 [2024-11-20 08:50:18.022758] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:47.286 
[2024-11-20 08:50:18.023460] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:47.544 [2024-11-20 08:50:18.243228] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:47.544 [2024-11-20 08:50:18.243618] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:47.802 150.00 IOPS, 450.00 MiB/s [2024-11-20T08:50:18.718Z] [2024-11-20 08:50:18.591057] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:48.061 [2024-11-20 08:50:18.742476] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:48.061 08:50:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.061 08:50:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.061 08:50:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.061 08:50:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.061 08:50:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.061 08:50:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.061 08:50:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.061 08:50:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.061 08:50:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:48.061 08:50:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.061 08:50:18 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.061 "name": "raid_bdev1", 00:16:48.061 "uuid": "374112a3-4b79-4276-acf4-8e78475d4f1a", 00:16:48.061 "strip_size_kb": 0, 00:16:48.061 "state": "online", 00:16:48.061 "raid_level": "raid1", 00:16:48.061 "superblock": false, 00:16:48.061 "num_base_bdevs": 4, 00:16:48.061 "num_base_bdevs_discovered": 4, 00:16:48.061 "num_base_bdevs_operational": 4, 00:16:48.061 "process": { 00:16:48.061 "type": "rebuild", 00:16:48.061 "target": "spare", 00:16:48.061 "progress": { 00:16:48.061 "blocks": 12288, 00:16:48.061 "percent": 18 00:16:48.061 } 00:16:48.061 }, 00:16:48.061 "base_bdevs_list": [ 00:16:48.061 { 00:16:48.061 "name": "spare", 00:16:48.061 "uuid": "f27bc2ad-4d69-5876-bb84-4e77b5e5aabe", 00:16:48.061 "is_configured": true, 00:16:48.061 "data_offset": 0, 00:16:48.061 "data_size": 65536 00:16:48.061 }, 00:16:48.061 { 00:16:48.061 "name": "BaseBdev2", 00:16:48.061 "uuid": "28ae1054-dd0c-57ac-99f5-3807e9718efc", 00:16:48.061 "is_configured": true, 00:16:48.061 "data_offset": 0, 00:16:48.061 "data_size": 65536 00:16:48.061 }, 00:16:48.061 { 00:16:48.061 "name": "BaseBdev3", 00:16:48.061 "uuid": "575b9a92-4a1b-5c2b-a696-59786691206e", 00:16:48.061 "is_configured": true, 00:16:48.061 "data_offset": 0, 00:16:48.061 "data_size": 65536 00:16:48.061 }, 00:16:48.061 { 00:16:48.061 "name": "BaseBdev4", 00:16:48.061 "uuid": "af7bab7c-b49a-5d69-a437-1a7b9335be42", 00:16:48.061 "is_configured": true, 00:16:48.061 "data_offset": 0, 00:16:48.061 "data_size": 65536 00:16:48.061 } 00:16:48.061 ] 00:16:48.061 }' 00:16:48.061 08:50:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.319 [2024-11-20 08:50:18.986356] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:48.319 [2024-11-20 08:50:19.070263] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:48.319 [2024-11-20 08:50:19.119038] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:48.319 [2024-11-20 08:50:19.130398] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.319 [2024-11-20 08:50:19.130462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:48.319 [2024-11-20 08:50:19.130494] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:48.319 [2024-11-20 08:50:19.153892] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.319 "name": "raid_bdev1", 00:16:48.319 "uuid": "374112a3-4b79-4276-acf4-8e78475d4f1a", 00:16:48.319 "strip_size_kb": 0, 00:16:48.319 "state": "online", 00:16:48.319 "raid_level": "raid1", 00:16:48.319 "superblock": false, 00:16:48.319 "num_base_bdevs": 4, 00:16:48.319 "num_base_bdevs_discovered": 3, 00:16:48.319 "num_base_bdevs_operational": 3, 00:16:48.319 "base_bdevs_list": [ 00:16:48.319 { 00:16:48.319 "name": null, 00:16:48.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.319 "is_configured": false, 00:16:48.319 "data_offset": 0, 00:16:48.319 "data_size": 65536 00:16:48.319 }, 00:16:48.319 { 00:16:48.319 "name": "BaseBdev2", 00:16:48.319 "uuid": "28ae1054-dd0c-57ac-99f5-3807e9718efc", 00:16:48.319 "is_configured": true, 00:16:48.319 "data_offset": 0, 00:16:48.319 "data_size": 
65536 00:16:48.319 }, 00:16:48.319 { 00:16:48.319 "name": "BaseBdev3", 00:16:48.319 "uuid": "575b9a92-4a1b-5c2b-a696-59786691206e", 00:16:48.319 "is_configured": true, 00:16:48.319 "data_offset": 0, 00:16:48.319 "data_size": 65536 00:16:48.319 }, 00:16:48.319 { 00:16:48.319 "name": "BaseBdev4", 00:16:48.319 "uuid": "af7bab7c-b49a-5d69-a437-1a7b9335be42", 00:16:48.319 "is_configured": true, 00:16:48.319 "data_offset": 0, 00:16:48.319 "data_size": 65536 00:16:48.319 } 00:16:48.319 ] 00:16:48.319 }' 00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.319 08:50:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:48.835 142.00 IOPS, 426.00 MiB/s [2024-11-20T08:50:19.751Z] 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:48.835 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.835 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:48.835 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:48.835 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.835 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.835 08:50:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.835 08:50:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:48.835 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.835 08:50:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.835 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.835 "name": "raid_bdev1", 
00:16:48.835 "uuid": "374112a3-4b79-4276-acf4-8e78475d4f1a", 00:16:48.835 "strip_size_kb": 0, 00:16:48.835 "state": "online", 00:16:48.835 "raid_level": "raid1", 00:16:48.835 "superblock": false, 00:16:48.835 "num_base_bdevs": 4, 00:16:48.835 "num_base_bdevs_discovered": 3, 00:16:48.835 "num_base_bdevs_operational": 3, 00:16:48.835 "base_bdevs_list": [ 00:16:48.835 { 00:16:48.835 "name": null, 00:16:48.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.835 "is_configured": false, 00:16:48.835 "data_offset": 0, 00:16:48.835 "data_size": 65536 00:16:48.835 }, 00:16:48.835 { 00:16:48.835 "name": "BaseBdev2", 00:16:48.835 "uuid": "28ae1054-dd0c-57ac-99f5-3807e9718efc", 00:16:48.835 "is_configured": true, 00:16:48.835 "data_offset": 0, 00:16:48.835 "data_size": 65536 00:16:48.835 }, 00:16:48.835 { 00:16:48.835 "name": "BaseBdev3", 00:16:48.835 "uuid": "575b9a92-4a1b-5c2b-a696-59786691206e", 00:16:48.835 "is_configured": true, 00:16:48.835 "data_offset": 0, 00:16:48.836 "data_size": 65536 00:16:48.836 }, 00:16:48.836 { 00:16:48.836 "name": "BaseBdev4", 00:16:48.836 "uuid": "af7bab7c-b49a-5d69-a437-1a7b9335be42", 00:16:48.836 "is_configured": true, 00:16:48.836 "data_offset": 0, 00:16:48.836 "data_size": 65536 00:16:48.836 } 00:16:48.836 ] 00:16:48.836 }' 00:16:48.836 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.094 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:49.094 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.094 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:49.094 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:49.094 08:50:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.094 08:50:19 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.094 [2024-11-20 08:50:19.843275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:49.094 08:50:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.094 08:50:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:49.094 [2024-11-20 08:50:19.917118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:49.094 [2024-11-20 08:50:19.919901] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:49.351 [2024-11-20 08:50:20.060470] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:49.351 [2024-11-20 08:50:20.182570] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:49.351 [2024-11-20 08:50:20.183168] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:49.867 145.33 IOPS, 436.00 MiB/s [2024-11-20T08:50:20.783Z] [2024-11-20 08:50:20.527295] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:49.867 [2024-11-20 08:50:20.527929] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:49.867 [2024-11-20 08:50:20.738654] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:49.867 [2024-11-20 08:50:20.739133] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:50.126 08:50:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.126 08:50:20 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.126 08:50:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.126 08:50:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.126 08:50:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.126 08:50:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.126 08:50:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.126 08:50:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.126 08:50:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.126 08:50:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.126 08:50:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.126 "name": "raid_bdev1", 00:16:50.126 "uuid": "374112a3-4b79-4276-acf4-8e78475d4f1a", 00:16:50.126 "strip_size_kb": 0, 00:16:50.126 "state": "online", 00:16:50.126 "raid_level": "raid1", 00:16:50.126 "superblock": false, 00:16:50.126 "num_base_bdevs": 4, 00:16:50.126 "num_base_bdevs_discovered": 4, 00:16:50.126 "num_base_bdevs_operational": 4, 00:16:50.126 "process": { 00:16:50.126 "type": "rebuild", 00:16:50.126 "target": "spare", 00:16:50.126 "progress": { 00:16:50.126 "blocks": 10240, 00:16:50.126 "percent": 15 00:16:50.126 } 00:16:50.126 }, 00:16:50.126 "base_bdevs_list": [ 00:16:50.126 { 00:16:50.126 "name": "spare", 00:16:50.126 "uuid": "f27bc2ad-4d69-5876-bb84-4e77b5e5aabe", 00:16:50.126 "is_configured": true, 00:16:50.126 "data_offset": 0, 00:16:50.126 "data_size": 65536 00:16:50.126 }, 00:16:50.126 { 00:16:50.126 "name": "BaseBdev2", 00:16:50.126 "uuid": "28ae1054-dd0c-57ac-99f5-3807e9718efc", 00:16:50.126 
"is_configured": true, 00:16:50.126 "data_offset": 0, 00:16:50.126 "data_size": 65536 00:16:50.126 }, 00:16:50.126 { 00:16:50.126 "name": "BaseBdev3", 00:16:50.126 "uuid": "575b9a92-4a1b-5c2b-a696-59786691206e", 00:16:50.126 "is_configured": true, 00:16:50.126 "data_offset": 0, 00:16:50.126 "data_size": 65536 00:16:50.126 }, 00:16:50.126 { 00:16:50.126 "name": "BaseBdev4", 00:16:50.126 "uuid": "af7bab7c-b49a-5d69-a437-1a7b9335be42", 00:16:50.126 "is_configured": true, 00:16:50.126 "data_offset": 0, 00:16:50.126 "data_size": 65536 00:16:50.126 } 00:16:50.126 ] 00:16:50.126 }' 00:16:50.126 08:50:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.126 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.126 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.384 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.384 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:50.384 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:50.384 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:50.384 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:50.384 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:50.384 08:50:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.384 08:50:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.384 [2024-11-20 08:50:21.063656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:50.384 [2024-11-20 08:50:21.127169] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: 
*DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:50.384 [2024-11-20 08:50:21.127250] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:16:50.384 08:50:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.384 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:50.384 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:50.384 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.384 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.385 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.385 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.385 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.385 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.385 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.385 08:50:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.385 08:50:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.385 08:50:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.385 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.385 "name": "raid_bdev1", 00:16:50.385 "uuid": "374112a3-4b79-4276-acf4-8e78475d4f1a", 00:16:50.385 "strip_size_kb": 0, 00:16:50.385 "state": "online", 00:16:50.385 "raid_level": "raid1", 00:16:50.385 "superblock": false, 00:16:50.385 "num_base_bdevs": 4, 00:16:50.385 
"num_base_bdevs_discovered": 3, 00:16:50.385 "num_base_bdevs_operational": 3, 00:16:50.385 "process": { 00:16:50.385 "type": "rebuild", 00:16:50.385 "target": "spare", 00:16:50.385 "progress": { 00:16:50.385 "blocks": 14336, 00:16:50.385 "percent": 21 00:16:50.385 } 00:16:50.385 }, 00:16:50.385 "base_bdevs_list": [ 00:16:50.385 { 00:16:50.385 "name": "spare", 00:16:50.385 "uuid": "f27bc2ad-4d69-5876-bb84-4e77b5e5aabe", 00:16:50.385 "is_configured": true, 00:16:50.385 "data_offset": 0, 00:16:50.385 "data_size": 65536 00:16:50.385 }, 00:16:50.385 { 00:16:50.385 "name": null, 00:16:50.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.385 "is_configured": false, 00:16:50.385 "data_offset": 0, 00:16:50.385 "data_size": 65536 00:16:50.385 }, 00:16:50.385 { 00:16:50.385 "name": "BaseBdev3", 00:16:50.385 "uuid": "575b9a92-4a1b-5c2b-a696-59786691206e", 00:16:50.385 "is_configured": true, 00:16:50.385 "data_offset": 0, 00:16:50.385 "data_size": 65536 00:16:50.385 }, 00:16:50.385 { 00:16:50.385 "name": "BaseBdev4", 00:16:50.385 "uuid": "af7bab7c-b49a-5d69-a437-1a7b9335be42", 00:16:50.385 "is_configured": true, 00:16:50.385 "data_offset": 0, 00:16:50.385 "data_size": 65536 00:16:50.385 } 00:16:50.385 ] 00:16:50.385 }' 00:16:50.385 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.385 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.385 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.643 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.643 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=522 00:16:50.643 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:50.643 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.643 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.643 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.643 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.643 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.643 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.643 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.643 08:50:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.643 08:50:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.643 08:50:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.643 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.643 "name": "raid_bdev1", 00:16:50.643 "uuid": "374112a3-4b79-4276-acf4-8e78475d4f1a", 00:16:50.643 "strip_size_kb": 0, 00:16:50.643 "state": "online", 00:16:50.643 "raid_level": "raid1", 00:16:50.643 "superblock": false, 00:16:50.643 "num_base_bdevs": 4, 00:16:50.643 "num_base_bdevs_discovered": 3, 00:16:50.643 "num_base_bdevs_operational": 3, 00:16:50.643 "process": { 00:16:50.643 "type": "rebuild", 00:16:50.643 "target": "spare", 00:16:50.643 "progress": { 00:16:50.643 "blocks": 16384, 00:16:50.643 "percent": 25 00:16:50.643 } 00:16:50.643 }, 00:16:50.643 "base_bdevs_list": [ 00:16:50.643 { 00:16:50.643 "name": "spare", 00:16:50.643 "uuid": "f27bc2ad-4d69-5876-bb84-4e77b5e5aabe", 00:16:50.643 "is_configured": true, 00:16:50.643 "data_offset": 0, 00:16:50.643 "data_size": 65536 00:16:50.643 }, 00:16:50.643 { 00:16:50.643 "name": null, 
00:16:50.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.643 "is_configured": false, 00:16:50.643 "data_offset": 0, 00:16:50.643 "data_size": 65536 00:16:50.643 }, 00:16:50.643 { 00:16:50.643 "name": "BaseBdev3", 00:16:50.643 "uuid": "575b9a92-4a1b-5c2b-a696-59786691206e", 00:16:50.643 "is_configured": true, 00:16:50.643 "data_offset": 0, 00:16:50.643 "data_size": 65536 00:16:50.643 }, 00:16:50.643 { 00:16:50.643 "name": "BaseBdev4", 00:16:50.643 "uuid": "af7bab7c-b49a-5d69-a437-1a7b9335be42", 00:16:50.643 "is_configured": true, 00:16:50.643 "data_offset": 0, 00:16:50.643 "data_size": 65536 00:16:50.643 } 00:16:50.643 ] 00:16:50.643 }' 00:16:50.643 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.643 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.643 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.643 123.25 IOPS, 369.75 MiB/s [2024-11-20T08:50:21.559Z] 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.643 08:50:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:50.643 [2024-11-20 08:50:21.515719] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:16:50.901 [2024-11-20 08:50:21.645609] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:50.901 [2024-11-20 08:50:21.646247] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:51.159 [2024-11-20 08:50:21.979069] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:16:51.417 [2024-11-20 08:50:22.216018] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:51.676 [2024-11-20 08:50:22.428672] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:16:51.676 107.40 IOPS, 322.20 MiB/s [2024-11-20T08:50:22.592Z] 08:50:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:51.676 08:50:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.676 08:50:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.676 08:50:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.676 08:50:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.676 08:50:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.676 08:50:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.676 08:50:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.676 08:50:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.676 08:50:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.676 08:50:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.676 08:50:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.676 "name": "raid_bdev1", 00:16:51.676 "uuid": "374112a3-4b79-4276-acf4-8e78475d4f1a", 00:16:51.676 "strip_size_kb": 0, 00:16:51.676 "state": "online", 00:16:51.676 "raid_level": "raid1", 00:16:51.676 "superblock": false, 00:16:51.676 "num_base_bdevs": 4, 00:16:51.676 "num_base_bdevs_discovered": 3, 00:16:51.676 "num_base_bdevs_operational": 
3, 00:16:51.676 "process": { 00:16:51.676 "type": "rebuild", 00:16:51.676 "target": "spare", 00:16:51.676 "progress": { 00:16:51.676 "blocks": 32768, 00:16:51.676 "percent": 50 00:16:51.676 } 00:16:51.676 }, 00:16:51.676 "base_bdevs_list": [ 00:16:51.676 { 00:16:51.676 "name": "spare", 00:16:51.676 "uuid": "f27bc2ad-4d69-5876-bb84-4e77b5e5aabe", 00:16:51.676 "is_configured": true, 00:16:51.676 "data_offset": 0, 00:16:51.676 "data_size": 65536 00:16:51.676 }, 00:16:51.676 { 00:16:51.676 "name": null, 00:16:51.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.676 "is_configured": false, 00:16:51.676 "data_offset": 0, 00:16:51.676 "data_size": 65536 00:16:51.676 }, 00:16:51.676 { 00:16:51.676 "name": "BaseBdev3", 00:16:51.676 "uuid": "575b9a92-4a1b-5c2b-a696-59786691206e", 00:16:51.676 "is_configured": true, 00:16:51.676 "data_offset": 0, 00:16:51.676 "data_size": 65536 00:16:51.676 }, 00:16:51.676 { 00:16:51.676 "name": "BaseBdev4", 00:16:51.676 "uuid": "af7bab7c-b49a-5d69-a437-1a7b9335be42", 00:16:51.676 "is_configured": true, 00:16:51.676 "data_offset": 0, 00:16:51.676 "data_size": 65536 00:16:51.676 } 00:16:51.676 ] 00:16:51.676 }' 00:16:51.676 08:50:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.676 08:50:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.676 08:50:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.935 08:50:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.935 08:50:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:51.935 [2024-11-20 08:50:22.632257] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:51.935 [2024-11-20 08:50:22.632711] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 
offset_begin: 30720 offset_end: 36864 00:16:52.193 [2024-11-20 08:50:22.980698] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:52.450 [2024-11-20 08:50:23.201964] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:52.708 94.83 IOPS, 284.50 MiB/s [2024-11-20T08:50:23.624Z] [2024-11-20 08:50:23.578751] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:16:52.708 [2024-11-20 08:50:23.579536] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:16:52.966 08:50:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:52.966 08:50:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.966 08:50:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.966 08:50:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.966 08:50:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.966 08:50:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.966 08:50:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.966 08:50:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.966 08:50:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.966 08:50:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:52.966 08:50:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.966 08:50:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.966 "name": "raid_bdev1", 00:16:52.966 "uuid": "374112a3-4b79-4276-acf4-8e78475d4f1a", 00:16:52.966 "strip_size_kb": 0, 00:16:52.966 "state": "online", 00:16:52.966 "raid_level": "raid1", 00:16:52.966 "superblock": false, 00:16:52.966 "num_base_bdevs": 4, 00:16:52.966 "num_base_bdevs_discovered": 3, 00:16:52.966 "num_base_bdevs_operational": 3, 00:16:52.966 "process": { 00:16:52.966 "type": "rebuild", 00:16:52.966 "target": "spare", 00:16:52.966 "progress": { 00:16:52.966 "blocks": 47104, 00:16:52.966 "percent": 71 00:16:52.966 } 00:16:52.966 }, 00:16:52.966 "base_bdevs_list": [ 00:16:52.966 { 00:16:52.966 "name": "spare", 00:16:52.966 "uuid": "f27bc2ad-4d69-5876-bb84-4e77b5e5aabe", 00:16:52.966 "is_configured": true, 00:16:52.966 "data_offset": 0, 00:16:52.967 "data_size": 65536 00:16:52.967 }, 00:16:52.967 { 00:16:52.967 "name": null, 00:16:52.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.967 "is_configured": false, 00:16:52.967 "data_offset": 0, 00:16:52.967 "data_size": 65536 00:16:52.967 }, 00:16:52.967 { 00:16:52.967 "name": "BaseBdev3", 00:16:52.967 "uuid": "575b9a92-4a1b-5c2b-a696-59786691206e", 00:16:52.967 "is_configured": true, 00:16:52.967 "data_offset": 0, 00:16:52.967 "data_size": 65536 00:16:52.967 }, 00:16:52.967 { 00:16:52.967 "name": "BaseBdev4", 00:16:52.967 "uuid": "af7bab7c-b49a-5d69-a437-1a7b9335be42", 00:16:52.967 "is_configured": true, 00:16:52.967 "data_offset": 0, 00:16:52.967 "data_size": 65536 00:16:52.967 } 00:16:52.967 ] 00:16:52.967 }' 00:16:52.967 08:50:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.967 08:50:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.967 08:50:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.967 08:50:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.967 08:50:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:53.224 [2024-11-20 08:50:24.030630] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:16:53.482 [2024-11-20 08:50:24.371417] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:16:53.999 85.29 IOPS, 255.86 MiB/s [2024-11-20T08:50:24.915Z] 08:50:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:53.999 08:50:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:53.999 08:50:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.999 08:50:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:53.999 08:50:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:53.999 08:50:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.999 08:50:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.999 08:50:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.999 08:50:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.999 08:50:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:53.999 08:50:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.999 [2024-11-20 08:50:24.823680] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:53.999 08:50:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.999 "name": 
"raid_bdev1", 00:16:53.999 "uuid": "374112a3-4b79-4276-acf4-8e78475d4f1a", 00:16:53.999 "strip_size_kb": 0, 00:16:53.999 "state": "online", 00:16:53.999 "raid_level": "raid1", 00:16:53.999 "superblock": false, 00:16:53.999 "num_base_bdevs": 4, 00:16:53.999 "num_base_bdevs_discovered": 3, 00:16:53.999 "num_base_bdevs_operational": 3, 00:16:53.999 "process": { 00:16:53.999 "type": "rebuild", 00:16:53.999 "target": "spare", 00:16:53.999 "progress": { 00:16:53.999 "blocks": 63488, 00:16:53.999 "percent": 96 00:16:53.999 } 00:16:53.999 }, 00:16:53.999 "base_bdevs_list": [ 00:16:53.999 { 00:16:53.999 "name": "spare", 00:16:54.000 "uuid": "f27bc2ad-4d69-5876-bb84-4e77b5e5aabe", 00:16:54.000 "is_configured": true, 00:16:54.000 "data_offset": 0, 00:16:54.000 "data_size": 65536 00:16:54.000 }, 00:16:54.000 { 00:16:54.000 "name": null, 00:16:54.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.000 "is_configured": false, 00:16:54.000 "data_offset": 0, 00:16:54.000 "data_size": 65536 00:16:54.000 }, 00:16:54.000 { 00:16:54.000 "name": "BaseBdev3", 00:16:54.000 "uuid": "575b9a92-4a1b-5c2b-a696-59786691206e", 00:16:54.000 "is_configured": true, 00:16:54.000 "data_offset": 0, 00:16:54.000 "data_size": 65536 00:16:54.000 }, 00:16:54.000 { 00:16:54.000 "name": "BaseBdev4", 00:16:54.000 "uuid": "af7bab7c-b49a-5d69-a437-1a7b9335be42", 00:16:54.000 "is_configured": true, 00:16:54.000 "data_offset": 0, 00:16:54.000 "data_size": 65536 00:16:54.000 } 00:16:54.000 ] 00:16:54.000 }' 00:16:54.000 08:50:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.000 08:50:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.000 08:50:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.258 [2024-11-20 08:50:24.923681] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:54.258 
[2024-11-20 08:50:24.935143] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.258 08:50:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.258 08:50:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:55.083 79.62 IOPS, 238.88 MiB/s [2024-11-20T08:50:25.999Z] 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:55.083 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.083 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.083 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.083 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.083 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.083 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.083 08:50:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.083 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.083 08:50:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:55.083 08:50:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.083 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.083 "name": "raid_bdev1", 00:16:55.083 "uuid": "374112a3-4b79-4276-acf4-8e78475d4f1a", 00:16:55.083 "strip_size_kb": 0, 00:16:55.083 "state": "online", 00:16:55.083 "raid_level": "raid1", 00:16:55.083 "superblock": false, 00:16:55.083 "num_base_bdevs": 4, 00:16:55.083 "num_base_bdevs_discovered": 3, 00:16:55.083 
"num_base_bdevs_operational": 3, 00:16:55.083 "base_bdevs_list": [ 00:16:55.083 { 00:16:55.083 "name": "spare", 00:16:55.083 "uuid": "f27bc2ad-4d69-5876-bb84-4e77b5e5aabe", 00:16:55.084 "is_configured": true, 00:16:55.084 "data_offset": 0, 00:16:55.084 "data_size": 65536 00:16:55.084 }, 00:16:55.084 { 00:16:55.084 "name": null, 00:16:55.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.084 "is_configured": false, 00:16:55.084 "data_offset": 0, 00:16:55.084 "data_size": 65536 00:16:55.084 }, 00:16:55.084 { 00:16:55.084 "name": "BaseBdev3", 00:16:55.084 "uuid": "575b9a92-4a1b-5c2b-a696-59786691206e", 00:16:55.084 "is_configured": true, 00:16:55.084 "data_offset": 0, 00:16:55.084 "data_size": 65536 00:16:55.084 }, 00:16:55.084 { 00:16:55.084 "name": "BaseBdev4", 00:16:55.084 "uuid": "af7bab7c-b49a-5d69-a437-1a7b9335be42", 00:16:55.084 "is_configured": true, 00:16:55.084 "data_offset": 0, 00:16:55.084 "data_size": 65536 00:16:55.084 } 00:16:55.084 ] 00:16:55.084 }' 00:16:55.084 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.342 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:55.342 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.342 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:55.342 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:16:55.342 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:55.342 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.342 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:55.342 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:55.342 
08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.342 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.342 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.342 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.342 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:55.342 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.342 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.342 "name": "raid_bdev1", 00:16:55.343 "uuid": "374112a3-4b79-4276-acf4-8e78475d4f1a", 00:16:55.343 "strip_size_kb": 0, 00:16:55.343 "state": "online", 00:16:55.343 "raid_level": "raid1", 00:16:55.343 "superblock": false, 00:16:55.343 "num_base_bdevs": 4, 00:16:55.343 "num_base_bdevs_discovered": 3, 00:16:55.343 "num_base_bdevs_operational": 3, 00:16:55.343 "base_bdevs_list": [ 00:16:55.343 { 00:16:55.343 "name": "spare", 00:16:55.343 "uuid": "f27bc2ad-4d69-5876-bb84-4e77b5e5aabe", 00:16:55.343 "is_configured": true, 00:16:55.343 "data_offset": 0, 00:16:55.343 "data_size": 65536 00:16:55.343 }, 00:16:55.343 { 00:16:55.343 "name": null, 00:16:55.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.343 "is_configured": false, 00:16:55.343 "data_offset": 0, 00:16:55.343 "data_size": 65536 00:16:55.343 }, 00:16:55.343 { 00:16:55.343 "name": "BaseBdev3", 00:16:55.343 "uuid": "575b9a92-4a1b-5c2b-a696-59786691206e", 00:16:55.343 "is_configured": true, 00:16:55.343 "data_offset": 0, 00:16:55.343 "data_size": 65536 00:16:55.343 }, 00:16:55.343 { 00:16:55.343 "name": "BaseBdev4", 00:16:55.343 "uuid": "af7bab7c-b49a-5d69-a437-1a7b9335be42", 00:16:55.343 "is_configured": true, 00:16:55.343 "data_offset": 0, 00:16:55.343 "data_size": 
65536 00:16:55.343 } 00:16:55.343 ] 00:16:55.343 }' 00:16:55.343 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.343 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:55.343 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.343 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:55.343 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:55.343 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.343 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.343 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.343 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.343 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:55.343 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.343 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.343 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.343 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.343 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.343 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.343 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.343 08:50:26 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:16:55.601 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.601 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.601 "name": "raid_bdev1", 00:16:55.601 "uuid": "374112a3-4b79-4276-acf4-8e78475d4f1a", 00:16:55.601 "strip_size_kb": 0, 00:16:55.601 "state": "online", 00:16:55.601 "raid_level": "raid1", 00:16:55.601 "superblock": false, 00:16:55.601 "num_base_bdevs": 4, 00:16:55.601 "num_base_bdevs_discovered": 3, 00:16:55.601 "num_base_bdevs_operational": 3, 00:16:55.601 "base_bdevs_list": [ 00:16:55.601 { 00:16:55.601 "name": "spare", 00:16:55.601 "uuid": "f27bc2ad-4d69-5876-bb84-4e77b5e5aabe", 00:16:55.601 "is_configured": true, 00:16:55.601 "data_offset": 0, 00:16:55.601 "data_size": 65536 00:16:55.601 }, 00:16:55.601 { 00:16:55.601 "name": null, 00:16:55.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.601 "is_configured": false, 00:16:55.601 "data_offset": 0, 00:16:55.601 "data_size": 65536 00:16:55.601 }, 00:16:55.601 { 00:16:55.601 "name": "BaseBdev3", 00:16:55.601 "uuid": "575b9a92-4a1b-5c2b-a696-59786691206e", 00:16:55.601 "is_configured": true, 00:16:55.601 "data_offset": 0, 00:16:55.601 "data_size": 65536 00:16:55.601 }, 00:16:55.601 { 00:16:55.601 "name": "BaseBdev4", 00:16:55.601 "uuid": "af7bab7c-b49a-5d69-a437-1a7b9335be42", 00:16:55.601 "is_configured": true, 00:16:55.601 "data_offset": 0, 00:16:55.601 "data_size": 65536 00:16:55.601 } 00:16:55.601 ] 00:16:55.601 }' 00:16:55.601 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.601 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:55.863 73.89 IOPS, 221.67 MiB/s [2024-11-20T08:50:26.779Z] 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:55.863 08:50:26 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.863 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:55.863 [2024-11-20 08:50:26.778383] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:56.121 [2024-11-20 08:50:26.778547] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:56.121 00:16:56.121 Latency(us) 00:16:56.121 [2024-11-20T08:50:27.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.121 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:56.121 raid_bdev1 : 9.39 72.18 216.53 0.00 0.00 18908.90 283.00 118203.11 00:16:56.121 [2024-11-20T08:50:27.037Z] =================================================================================================================== 00:16:56.121 [2024-11-20T08:50:27.037Z] Total : 72.18 216.53 0.00 0.00 18908.90 283.00 118203.11 00:16:56.121 { 00:16:56.121 "results": [ 00:16:56.121 { 00:16:56.121 "job": "raid_bdev1", 00:16:56.121 "core_mask": "0x1", 00:16:56.121 "workload": "randrw", 00:16:56.121 "percentage": 50, 00:16:56.121 "status": "finished", 00:16:56.121 "queue_depth": 2, 00:16:56.121 "io_size": 3145728, 00:16:56.121 "runtime": 9.393688, 00:16:56.121 "iops": 72.17612507462458, 00:16:56.121 "mibps": 216.52837522387375, 00:16:56.121 "io_failed": 0, 00:16:56.121 "io_timeout": 0, 00:16:56.121 "avg_latency_us": 18908.902805041565, 00:16:56.121 "min_latency_us": 282.99636363636364, 00:16:56.121 "max_latency_us": 118203.11272727273 00:16:56.121 } 00:16:56.121 ], 00:16:56.121 "core_count": 1 00:16:56.121 } 00:16:56.121 [2024-11-20 08:50:26.842388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.121 [2024-11-20 08:50:26.842451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:56.121 [2024-11-20 08:50:26.842590] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:56.121 [2024-11-20 08:50:26.842614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:56.121 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.121 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:56.121 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.121 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.121 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:56.121 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.121 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:56.121 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:56.121 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:56.121 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:56.121 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:56.121 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:56.121 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:56.121 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:56.121 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:56.121 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:56.121 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:56.121 
08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:56.121 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:56.379 /dev/nbd0 00:16:56.379 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:56.379 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:56.380 1+0 records in 00:16:56.380 1+0 records out 00:16:56.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284425 s, 14.4 MB/s 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:56.380 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:56.946 /dev/nbd1 00:16:56.946 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:56.946 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:56.946 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:56.946 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:56.946 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:56.946 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:56.946 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:56.946 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:56.946 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:56.946 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:56.947 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:56.947 1+0 records in 00:16:56.947 1+0 records out 00:16:56.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359092 s, 11.4 MB/s 00:16:56.947 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:56.947 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:56.947 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:56.947 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:16:56.947 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:56.947 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:56.947 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:56.947 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:56.947 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:56.947 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:56.947 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:56.947 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:56.947 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:56.947 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:56.947 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:57.204 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:57.204 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:57.204 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:57.204 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:57.204 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:57.204 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:57.462 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:57.462 08:50:28 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:16:57.462 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:57.462 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:57.462 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:57.462 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:57.462 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:57.462 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:57.462 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:57.462 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:57.462 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:57.462 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:57.462 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:57.462 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:57.720 /dev/nbd1 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:57.721 1+0 records in 00:16:57.721 1+0 records out 00:16:57.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409273 s, 10.0 MB/s 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:57.721 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:57.979 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:57.979 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:57.979 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:57.979 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:57.979 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:57.979 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:57.979 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:57.979 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:57.979 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:57.979 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:57.979 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:57.979 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:57.979 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:57.979 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:57.979 08:50:28 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:58.546 08:50:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:58.546 08:50:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:58.546 08:50:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:58.546 08:50:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:58.546 08:50:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:58.546 08:50:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:58.546 08:50:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:58.546 08:50:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:58.546 08:50:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:58.546 08:50:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79068 00:16:58.546 08:50:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79068 ']' 00:16:58.546 08:50:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79068 00:16:58.546 08:50:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:16:58.546 08:50:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:58.546 08:50:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79068 00:16:58.546 killing process with pid 79068 00:16:58.546 Received shutdown signal, test time was about 11.797386 seconds 00:16:58.546 00:16:58.546 Latency(us) 00:16:58.546 [2024-11-20T08:50:29.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.546 
[2024-11-20T08:50:29.462Z] =================================================================================================================== 00:16:58.546 [2024-11-20T08:50:29.462Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:58.546 08:50:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:58.546 08:50:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:58.546 08:50:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79068' 00:16:58.546 08:50:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79068 00:16:58.546 [2024-11-20 08:50:29.225952] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:58.546 08:50:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79068 00:16:58.803 [2024-11-20 08:50:29.586638] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:59.736 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:59.736 00:16:59.736 real 0m15.374s 00:16:59.736 user 0m20.133s 00:16:59.736 sys 0m1.861s 00:16:59.736 08:50:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.736 08:50:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.736 ************************************ 00:16:59.736 END TEST raid_rebuild_test_io 00:16:59.736 ************************************ 00:16:59.995 08:50:30 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:16:59.995 08:50:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:59.995 08:50:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.995 08:50:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:59.995 ************************************ 00:16:59.995 START TEST 
raid_rebuild_test_sb_io 00:16:59.995 ************************************ 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 
00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79503 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79503 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79503 ']' 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.995 08:50:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.995 [2024-11-20 08:50:30.810071] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:16:59.995 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:59.995 Zero copy mechanism will not be used. 00:16:59.995 [2024-11-20 08:50:30.810555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79503 ] 00:17:00.254 [2024-11-20 08:50:30.992071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.254 [2024-11-20 08:50:31.112736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.511 [2024-11-20 08:50:31.369129] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.511 [2024-11-20 08:50:31.369185] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.075 BaseBdev1_malloc 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.075 [2024-11-20 08:50:31.797883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:01.075 [2024-11-20 08:50:31.797980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.075 [2024-11-20 08:50:31.798009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:01.075 [2024-11-20 08:50:31.798027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.075 [2024-11-20 08:50:31.800747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.075 [2024-11-20 08:50:31.800799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:01.075 BaseBdev1 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.075 BaseBdev2_malloc 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.075 [2024-11-20 08:50:31.844039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:01.075 [2024-11-20 08:50:31.844109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.075 [2024-11-20 08:50:31.844136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:01.075 [2024-11-20 08:50:31.844191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.075 [2024-11-20 08:50:31.846841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.075 [2024-11-20 08:50:31.847028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:01.075 BaseBdev2 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:17:01.075 BaseBdev3_malloc 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:01.075 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.076 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.076 [2024-11-20 08:50:31.910859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:01.076 [2024-11-20 08:50:31.910957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.076 [2024-11-20 08:50:31.910987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:01.076 [2024-11-20 08:50:31.911005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.076 [2024-11-20 08:50:31.913703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.076 [2024-11-20 08:50:31.913884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:01.076 BaseBdev3 00:17:01.076 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.076 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:01.076 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:01.076 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.076 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.076 BaseBdev4_malloc 00:17:01.076 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:01.076 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:01.076 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.076 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.076 [2024-11-20 08:50:31.957505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:01.076 [2024-11-20 08:50:31.957707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.076 [2024-11-20 08:50:31.957746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:01.076 [2024-11-20 08:50:31.957766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.076 [2024-11-20 08:50:31.960472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.076 [2024-11-20 08:50:31.960525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:01.076 BaseBdev4 00:17:01.076 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.076 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:01.076 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.076 08:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.334 spare_malloc 00:17:01.334 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.334 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:01.334 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:01.334 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.334 spare_delay 00:17:01.334 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.334 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:01.334 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.334 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.334 [2024-11-20 08:50:32.017506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:01.334 [2024-11-20 08:50:32.017577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.334 [2024-11-20 08:50:32.017609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:01.334 [2024-11-20 08:50:32.017627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.334 [2024-11-20 08:50:32.020480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.334 [2024-11-20 08:50:32.020656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:01.334 spare 00:17:01.334 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.334 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:01.334 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.334 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.334 [2024-11-20 08:50:32.025608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:01.334 [2024-11-20 
08:50:32.028091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:01.334 [2024-11-20 08:50:32.028176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:01.334 [2024-11-20 08:50:32.028298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:01.334 [2024-11-20 08:50:32.028537] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:01.334 [2024-11-20 08:50:32.028579] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:01.335 [2024-11-20 08:50:32.028901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:01.335 [2024-11-20 08:50:32.029114] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:01.335 [2024-11-20 08:50:32.029130] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:01.335 [2024-11-20 08:50:32.029349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.335 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.335 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:01.335 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.335 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.335 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.335 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.335 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:01.335 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.335 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.335 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.335 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.335 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.335 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.335 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.335 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.335 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.335 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.335 "name": "raid_bdev1", 00:17:01.335 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:01.335 "strip_size_kb": 0, 00:17:01.335 "state": "online", 00:17:01.335 "raid_level": "raid1", 00:17:01.335 "superblock": true, 00:17:01.335 "num_base_bdevs": 4, 00:17:01.335 "num_base_bdevs_discovered": 4, 00:17:01.335 "num_base_bdevs_operational": 4, 00:17:01.335 "base_bdevs_list": [ 00:17:01.335 { 00:17:01.335 "name": "BaseBdev1", 00:17:01.335 "uuid": "91bc9d08-d3c7-5e39-9585-a2c0737a89d3", 00:17:01.335 "is_configured": true, 00:17:01.335 "data_offset": 2048, 00:17:01.335 "data_size": 63488 00:17:01.335 }, 00:17:01.335 { 00:17:01.335 "name": "BaseBdev2", 00:17:01.335 "uuid": "fb8fbdb1-b953-5d30-99ae-85371a3d74a6", 00:17:01.335 "is_configured": true, 00:17:01.335 "data_offset": 2048, 00:17:01.335 "data_size": 63488 00:17:01.335 }, 00:17:01.335 { 00:17:01.335 "name": "BaseBdev3", 00:17:01.335 "uuid": "6d319d39-6dba-53ff-8e1c-d23164a71b72", 
00:17:01.335 "is_configured": true, 00:17:01.335 "data_offset": 2048, 00:17:01.335 "data_size": 63488 00:17:01.335 }, 00:17:01.335 { 00:17:01.335 "name": "BaseBdev4", 00:17:01.335 "uuid": "04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:01.335 "is_configured": true, 00:17:01.335 "data_offset": 2048, 00:17:01.335 "data_size": 63488 00:17:01.335 } 00:17:01.335 ] 00:17:01.335 }' 00:17:01.335 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.335 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.902 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:01.902 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.902 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.902 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:01.902 [2024-11-20 08:50:32.534134] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:01.902 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.902 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:01.902 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.902 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- 
# data_offset=2048 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.903 [2024-11-20 08:50:32.641672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.903 08:50:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.903 "name": "raid_bdev1", 00:17:01.903 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:01.903 "strip_size_kb": 0, 00:17:01.903 "state": "online", 00:17:01.903 "raid_level": "raid1", 00:17:01.903 "superblock": true, 00:17:01.903 "num_base_bdevs": 4, 00:17:01.903 "num_base_bdevs_discovered": 3, 00:17:01.903 "num_base_bdevs_operational": 3, 00:17:01.903 "base_bdevs_list": [ 00:17:01.903 { 00:17:01.903 "name": null, 00:17:01.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.903 "is_configured": false, 00:17:01.903 "data_offset": 0, 00:17:01.903 "data_size": 63488 00:17:01.903 }, 00:17:01.903 { 00:17:01.903 "name": "BaseBdev2", 00:17:01.903 "uuid": "fb8fbdb1-b953-5d30-99ae-85371a3d74a6", 00:17:01.903 "is_configured": true, 00:17:01.903 "data_offset": 2048, 00:17:01.903 "data_size": 63488 00:17:01.903 }, 00:17:01.903 { 00:17:01.903 "name": "BaseBdev3", 00:17:01.903 "uuid": "6d319d39-6dba-53ff-8e1c-d23164a71b72", 00:17:01.903 "is_configured": true, 00:17:01.903 "data_offset": 2048, 00:17:01.903 "data_size": 63488 00:17:01.903 }, 00:17:01.903 { 00:17:01.903 "name": "BaseBdev4", 00:17:01.903 "uuid": "04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:01.903 "is_configured": true, 00:17:01.903 "data_offset": 2048, 00:17:01.903 "data_size": 63488 00:17:01.903 } 00:17:01.903 ] 00:17:01.903 }' 00:17:01.903 08:50:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.903 08:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.903 [2024-11-20 08:50:32.765927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:01.903 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:01.903 Zero copy mechanism will not be used. 00:17:01.903 Running I/O for 60 seconds... 00:17:02.469 08:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:02.469 08:50:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.469 08:50:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.469 [2024-11-20 08:50:33.178460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:02.469 08:50:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.469 08:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:02.469 [2024-11-20 08:50:33.250910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:02.469 [2024-11-20 08:50:33.253468] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:02.469 [2024-11-20 08:50:33.379373] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:02.469 [2024-11-20 08:50:33.381030] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:03.034 [2024-11-20 08:50:33.642323] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:03.034 [2024-11-20 08:50:33.643332] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:03.291 129.00 IOPS, 387.00 MiB/s [2024-11-20T08:50:34.207Z] [2024-11-20 08:50:34.007391] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:03.291 [2024-11-20 08:50:34.008321] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:03.549 [2024-11-20 08:50:34.223241] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:03.549 [2024-11-20 08:50:34.224382] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:03.549 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.549 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.549 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.549 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.549 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.549 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.549 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.549 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.549 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.549 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.549 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:03.549 "name": "raid_bdev1", 00:17:03.549 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:03.549 "strip_size_kb": 0, 00:17:03.549 "state": "online", 00:17:03.549 "raid_level": "raid1", 00:17:03.549 "superblock": true, 00:17:03.549 "num_base_bdevs": 4, 00:17:03.549 "num_base_bdevs_discovered": 4, 00:17:03.549 "num_base_bdevs_operational": 4, 00:17:03.549 "process": { 00:17:03.549 "type": "rebuild", 00:17:03.549 "target": "spare", 00:17:03.549 "progress": { 00:17:03.549 "blocks": 10240, 00:17:03.549 "percent": 16 00:17:03.549 } 00:17:03.549 }, 00:17:03.549 "base_bdevs_list": [ 00:17:03.549 { 00:17:03.549 "name": "spare", 00:17:03.549 "uuid": "5d98812f-0639-53b7-a0df-13b475982b87", 00:17:03.549 "is_configured": true, 00:17:03.549 "data_offset": 2048, 00:17:03.549 "data_size": 63488 00:17:03.549 }, 00:17:03.549 { 00:17:03.549 "name": "BaseBdev2", 00:17:03.549 "uuid": "fb8fbdb1-b953-5d30-99ae-85371a3d74a6", 00:17:03.549 "is_configured": true, 00:17:03.549 "data_offset": 2048, 00:17:03.549 "data_size": 63488 00:17:03.549 }, 00:17:03.549 { 00:17:03.549 "name": "BaseBdev3", 00:17:03.549 "uuid": "6d319d39-6dba-53ff-8e1c-d23164a71b72", 00:17:03.549 "is_configured": true, 00:17:03.549 "data_offset": 2048, 00:17:03.549 "data_size": 63488 00:17:03.549 }, 00:17:03.549 { 00:17:03.549 "name": "BaseBdev4", 00:17:03.549 "uuid": "04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:03.549 "is_configured": true, 00:17:03.549 "data_offset": 2048, 00:17:03.549 "data_size": 63488 00:17:03.549 } 00:17:03.549 ] 00:17:03.549 }' 00:17:03.549 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.549 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:03.549 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.549 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:17:03.549 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:03.549 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.549 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.549 [2024-11-20 08:50:34.387097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:03.807 [2024-11-20 08:50:34.534469] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:03.807 [2024-11-20 08:50:34.538608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.807 [2024-11-20 08:50:34.538657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:03.807 [2024-11-20 08:50:34.538679] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:03.807 [2024-11-20 08:50:34.579684] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:17:03.807 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.807 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:03.807 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.807 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.807 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.807 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.807 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:03.807 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.807 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.807 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.807 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.807 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.807 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.807 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.807 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.807 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.807 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.807 "name": "raid_bdev1", 00:17:03.807 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:03.807 "strip_size_kb": 0, 00:17:03.807 "state": "online", 00:17:03.807 "raid_level": "raid1", 00:17:03.807 "superblock": true, 00:17:03.807 "num_base_bdevs": 4, 00:17:03.807 "num_base_bdevs_discovered": 3, 00:17:03.807 "num_base_bdevs_operational": 3, 00:17:03.807 "base_bdevs_list": [ 00:17:03.807 { 00:17:03.807 "name": null, 00:17:03.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.807 "is_configured": false, 00:17:03.807 "data_offset": 0, 00:17:03.807 "data_size": 63488 00:17:03.807 }, 00:17:03.807 { 00:17:03.807 "name": "BaseBdev2", 00:17:03.807 "uuid": "fb8fbdb1-b953-5d30-99ae-85371a3d74a6", 00:17:03.807 "is_configured": true, 00:17:03.807 "data_offset": 2048, 00:17:03.807 "data_size": 63488 00:17:03.807 }, 00:17:03.807 { 00:17:03.807 "name": "BaseBdev3", 00:17:03.807 "uuid": "6d319d39-6dba-53ff-8e1c-d23164a71b72", 00:17:03.807 
"is_configured": true, 00:17:03.807 "data_offset": 2048, 00:17:03.807 "data_size": 63488 00:17:03.807 }, 00:17:03.807 { 00:17:03.807 "name": "BaseBdev4", 00:17:03.807 "uuid": "04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:03.807 "is_configured": true, 00:17:03.807 "data_offset": 2048, 00:17:03.807 "data_size": 63488 00:17:03.807 } 00:17:03.807 ] 00:17:03.807 }' 00:17:03.807 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.807 08:50:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.324 125.50 IOPS, 376.50 MiB/s [2024-11-20T08:50:35.240Z] 08:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:04.324 08:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.324 08:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:04.324 08:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:04.324 08:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.324 08:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.324 08:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.324 08:50:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.324 08:50:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.324 08:50:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.324 08:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.324 "name": "raid_bdev1", 00:17:04.324 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:04.324 "strip_size_kb": 0, 00:17:04.324 
"state": "online", 00:17:04.324 "raid_level": "raid1", 00:17:04.324 "superblock": true, 00:17:04.324 "num_base_bdevs": 4, 00:17:04.324 "num_base_bdevs_discovered": 3, 00:17:04.324 "num_base_bdevs_operational": 3, 00:17:04.324 "base_bdevs_list": [ 00:17:04.324 { 00:17:04.324 "name": null, 00:17:04.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.324 "is_configured": false, 00:17:04.324 "data_offset": 0, 00:17:04.324 "data_size": 63488 00:17:04.324 }, 00:17:04.324 { 00:17:04.324 "name": "BaseBdev2", 00:17:04.324 "uuid": "fb8fbdb1-b953-5d30-99ae-85371a3d74a6", 00:17:04.324 "is_configured": true, 00:17:04.324 "data_offset": 2048, 00:17:04.324 "data_size": 63488 00:17:04.324 }, 00:17:04.324 { 00:17:04.324 "name": "BaseBdev3", 00:17:04.324 "uuid": "6d319d39-6dba-53ff-8e1c-d23164a71b72", 00:17:04.324 "is_configured": true, 00:17:04.324 "data_offset": 2048, 00:17:04.324 "data_size": 63488 00:17:04.324 }, 00:17:04.324 { 00:17:04.324 "name": "BaseBdev4", 00:17:04.324 "uuid": "04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:04.324 "is_configured": true, 00:17:04.324 "data_offset": 2048, 00:17:04.324 "data_size": 63488 00:17:04.324 } 00:17:04.324 ] 00:17:04.324 }' 00:17:04.324 08:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.324 08:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:04.324 08:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.582 08:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:04.582 08:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:04.582 08:50:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.582 08:50:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.582 
[2024-11-20 08:50:35.246145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:04.582 08:50:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.582 08:50:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:04.582 [2024-11-20 08:50:35.328616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:04.582 [2024-11-20 08:50:35.331266] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:04.582 [2024-11-20 08:50:35.461128] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:04.582 [2024-11-20 08:50:35.462029] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:04.839 [2024-11-20 08:50:35.675837] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:04.839 [2024-11-20 08:50:35.676752] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:05.354 129.67 IOPS, 389.00 MiB/s [2024-11-20T08:50:36.270Z] [2024-11-20 08:50:36.132176] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:05.354 [2024-11-20 08:50:36.132459] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:05.612 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.612 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.612 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.612 08:50:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.612 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.612 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.612 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.612 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.612 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.612 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.612 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.612 "name": "raid_bdev1", 00:17:05.612 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:05.612 "strip_size_kb": 0, 00:17:05.612 "state": "online", 00:17:05.612 "raid_level": "raid1", 00:17:05.612 "superblock": true, 00:17:05.612 "num_base_bdevs": 4, 00:17:05.612 "num_base_bdevs_discovered": 4, 00:17:05.612 "num_base_bdevs_operational": 4, 00:17:05.612 "process": { 00:17:05.612 "type": "rebuild", 00:17:05.612 "target": "spare", 00:17:05.612 "progress": { 00:17:05.612 "blocks": 10240, 00:17:05.612 "percent": 16 00:17:05.612 } 00:17:05.612 }, 00:17:05.612 "base_bdevs_list": [ 00:17:05.612 { 00:17:05.612 "name": "spare", 00:17:05.612 "uuid": "5d98812f-0639-53b7-a0df-13b475982b87", 00:17:05.612 "is_configured": true, 00:17:05.612 "data_offset": 2048, 00:17:05.612 "data_size": 63488 00:17:05.612 }, 00:17:05.612 { 00:17:05.612 "name": "BaseBdev2", 00:17:05.612 "uuid": "fb8fbdb1-b953-5d30-99ae-85371a3d74a6", 00:17:05.612 "is_configured": true, 00:17:05.612 "data_offset": 2048, 00:17:05.612 "data_size": 63488 00:17:05.612 }, 00:17:05.612 { 00:17:05.612 "name": "BaseBdev3", 00:17:05.612 "uuid": 
"6d319d39-6dba-53ff-8e1c-d23164a71b72", 00:17:05.612 "is_configured": true, 00:17:05.612 "data_offset": 2048, 00:17:05.612 "data_size": 63488 00:17:05.612 }, 00:17:05.612 { 00:17:05.612 "name": "BaseBdev4", 00:17:05.612 "uuid": "04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:05.612 "is_configured": true, 00:17:05.612 "data_offset": 2048, 00:17:05.612 "data_size": 63488 00:17:05.612 } 00:17:05.612 ] 00:17:05.612 }' 00:17:05.612 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.612 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.612 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.612 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.612 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:05.612 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:05.612 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:05.612 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:05.612 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:05.612 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:05.612 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:05.612 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.612 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.612 [2024-11-20 08:50:36.482312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:06.177 115.50 
IOPS, 346.50 MiB/s [2024-11-20T08:50:37.093Z] [2024-11-20 08:50:36.822492] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:17:06.177 [2024-11-20 08:50:36.822674] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:17:06.177 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.177 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:06.177 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:06.177 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.177 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.177 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.177 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.177 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.177 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.178 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.178 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.178 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.178 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.178 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.178 "name": "raid_bdev1", 00:17:06.178 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:06.178 
"strip_size_kb": 0, 00:17:06.178 "state": "online", 00:17:06.178 "raid_level": "raid1", 00:17:06.178 "superblock": true, 00:17:06.178 "num_base_bdevs": 4, 00:17:06.178 "num_base_bdevs_discovered": 3, 00:17:06.178 "num_base_bdevs_operational": 3, 00:17:06.178 "process": { 00:17:06.178 "type": "rebuild", 00:17:06.178 "target": "spare", 00:17:06.178 "progress": { 00:17:06.178 "blocks": 16384, 00:17:06.178 "percent": 25 00:17:06.178 } 00:17:06.178 }, 00:17:06.178 "base_bdevs_list": [ 00:17:06.178 { 00:17:06.178 "name": "spare", 00:17:06.178 "uuid": "5d98812f-0639-53b7-a0df-13b475982b87", 00:17:06.178 "is_configured": true, 00:17:06.178 "data_offset": 2048, 00:17:06.178 "data_size": 63488 00:17:06.178 }, 00:17:06.178 { 00:17:06.178 "name": null, 00:17:06.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.178 "is_configured": false, 00:17:06.178 "data_offset": 0, 00:17:06.178 "data_size": 63488 00:17:06.178 }, 00:17:06.178 { 00:17:06.178 "name": "BaseBdev3", 00:17:06.178 "uuid": "6d319d39-6dba-53ff-8e1c-d23164a71b72", 00:17:06.178 "is_configured": true, 00:17:06.178 "data_offset": 2048, 00:17:06.178 "data_size": 63488 00:17:06.178 }, 00:17:06.178 { 00:17:06.178 "name": "BaseBdev4", 00:17:06.178 "uuid": "04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:06.178 "is_configured": true, 00:17:06.178 "data_offset": 2048, 00:17:06.178 "data_size": 63488 00:17:06.178 } 00:17:06.178 ] 00:17:06.178 }' 00:17:06.178 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.178 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.178 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.178 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.178 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=537 00:17:06.178 
08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:06.178 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.178 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.178 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.178 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.178 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.178 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.178 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.178 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.178 08:50:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.178 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.178 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.178 "name": "raid_bdev1", 00:17:06.178 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:06.178 "strip_size_kb": 0, 00:17:06.178 "state": "online", 00:17:06.178 "raid_level": "raid1", 00:17:06.178 "superblock": true, 00:17:06.178 "num_base_bdevs": 4, 00:17:06.178 "num_base_bdevs_discovered": 3, 00:17:06.178 "num_base_bdevs_operational": 3, 00:17:06.178 "process": { 00:17:06.178 "type": "rebuild", 00:17:06.178 "target": "spare", 00:17:06.178 "progress": { 00:17:06.178 "blocks": 18432, 00:17:06.178 "percent": 29 00:17:06.178 } 00:17:06.178 }, 00:17:06.178 "base_bdevs_list": [ 00:17:06.178 { 00:17:06.178 "name": "spare", 00:17:06.178 
"uuid": "5d98812f-0639-53b7-a0df-13b475982b87", 00:17:06.178 "is_configured": true, 00:17:06.178 "data_offset": 2048, 00:17:06.178 "data_size": 63488 00:17:06.178 }, 00:17:06.178 { 00:17:06.178 "name": null, 00:17:06.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.178 "is_configured": false, 00:17:06.178 "data_offset": 0, 00:17:06.178 "data_size": 63488 00:17:06.178 }, 00:17:06.178 { 00:17:06.178 "name": "BaseBdev3", 00:17:06.178 "uuid": "6d319d39-6dba-53ff-8e1c-d23164a71b72", 00:17:06.178 "is_configured": true, 00:17:06.178 "data_offset": 2048, 00:17:06.178 "data_size": 63488 00:17:06.178 }, 00:17:06.178 { 00:17:06.178 "name": "BaseBdev4", 00:17:06.178 "uuid": "04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:06.178 "is_configured": true, 00:17:06.178 "data_offset": 2048, 00:17:06.178 "data_size": 63488 00:17:06.178 } 00:17:06.178 ] 00:17:06.178 }' 00:17:06.178 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.178 [2024-11-20 08:50:37.074306] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:06.178 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.178 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.436 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.436 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:06.436 [2024-11-20 08:50:37.177336] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:17:06.693 [2024-11-20 08:50:37.407075] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:17:06.950 [2024-11-20 08:50:37.648109] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:17:07.208 101.00 IOPS, 303.00 MiB/s [2024-11-20T08:50:38.124Z] [2024-11-20 08:50:37.974442] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:07.466 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:07.466 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:07.466 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.466 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:07.466 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:07.466 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.466 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.466 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.466 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.466 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:07.466 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.466 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.466 "name": "raid_bdev1", 00:17:07.466 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:07.466 "strip_size_kb": 0, 00:17:07.466 "state": "online", 00:17:07.466 "raid_level": "raid1", 00:17:07.466 "superblock": true, 00:17:07.466 "num_base_bdevs": 4, 00:17:07.466 "num_base_bdevs_discovered": 3, 
00:17:07.466 "num_base_bdevs_operational": 3, 00:17:07.466 "process": { 00:17:07.466 "type": "rebuild", 00:17:07.466 "target": "spare", 00:17:07.466 "progress": { 00:17:07.466 "blocks": 34816, 00:17:07.466 "percent": 54 00:17:07.466 } 00:17:07.466 }, 00:17:07.466 "base_bdevs_list": [ 00:17:07.466 { 00:17:07.466 "name": "spare", 00:17:07.466 "uuid": "5d98812f-0639-53b7-a0df-13b475982b87", 00:17:07.466 "is_configured": true, 00:17:07.466 "data_offset": 2048, 00:17:07.466 "data_size": 63488 00:17:07.466 }, 00:17:07.466 { 00:17:07.466 "name": null, 00:17:07.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.466 "is_configured": false, 00:17:07.466 "data_offset": 0, 00:17:07.466 "data_size": 63488 00:17:07.466 }, 00:17:07.466 { 00:17:07.466 "name": "BaseBdev3", 00:17:07.466 "uuid": "6d319d39-6dba-53ff-8e1c-d23164a71b72", 00:17:07.466 "is_configured": true, 00:17:07.466 "data_offset": 2048, 00:17:07.466 "data_size": 63488 00:17:07.466 }, 00:17:07.466 { 00:17:07.466 "name": "BaseBdev4", 00:17:07.466 "uuid": "04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:07.466 "is_configured": true, 00:17:07.466 "data_offset": 2048, 00:17:07.466 "data_size": 63488 00:17:07.466 } 00:17:07.466 ] 00:17:07.466 }' 00:17:07.466 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.466 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:07.466 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.466 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:07.466 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:08.597 91.67 IOPS, 275.00 MiB/s [2024-11-20T08:50:39.513Z] 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:08.597 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.597 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.597 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.597 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.597 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.597 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.597 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.597 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.597 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.597 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.597 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.597 "name": "raid_bdev1", 00:17:08.597 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:08.597 "strip_size_kb": 0, 00:17:08.597 "state": "online", 00:17:08.597 "raid_level": "raid1", 00:17:08.597 "superblock": true, 00:17:08.597 "num_base_bdevs": 4, 00:17:08.597 "num_base_bdevs_discovered": 3, 00:17:08.597 "num_base_bdevs_operational": 3, 00:17:08.597 "process": { 00:17:08.597 "type": "rebuild", 00:17:08.597 "target": "spare", 00:17:08.597 "progress": { 00:17:08.597 "blocks": 57344, 00:17:08.597 "percent": 90 00:17:08.597 } 00:17:08.597 }, 00:17:08.597 "base_bdevs_list": [ 00:17:08.597 { 00:17:08.597 "name": "spare", 00:17:08.597 "uuid": "5d98812f-0639-53b7-a0df-13b475982b87", 00:17:08.597 "is_configured": true, 00:17:08.597 "data_offset": 2048, 00:17:08.597 "data_size": 63488 
00:17:08.597 }, 00:17:08.597 { 00:17:08.597 "name": null, 00:17:08.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.597 "is_configured": false, 00:17:08.597 "data_offset": 0, 00:17:08.597 "data_size": 63488 00:17:08.597 }, 00:17:08.597 { 00:17:08.597 "name": "BaseBdev3", 00:17:08.597 "uuid": "6d319d39-6dba-53ff-8e1c-d23164a71b72", 00:17:08.597 "is_configured": true, 00:17:08.597 "data_offset": 2048, 00:17:08.597 "data_size": 63488 00:17:08.597 }, 00:17:08.597 { 00:17:08.597 "name": "BaseBdev4", 00:17:08.597 "uuid": "04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:08.597 "is_configured": true, 00:17:08.597 "data_offset": 2048, 00:17:08.597 "data_size": 63488 00:17:08.597 } 00:17:08.597 ] 00:17:08.597 }' 00:17:08.597 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.597 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.597 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.597 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.597 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:08.856 [2024-11-20 08:50:39.647436] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:08.856 [2024-11-20 08:50:39.755084] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:08.856 [2024-11-20 08:50:39.759500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.688 84.29 IOPS, 252.86 MiB/s [2024-11-20T08:50:40.604Z] 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:09.688 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.688 08:50:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.688 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.688 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.688 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.688 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.688 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.688 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.688 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:09.688 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.688 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.688 "name": "raid_bdev1", 00:17:09.688 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:09.688 "strip_size_kb": 0, 00:17:09.688 "state": "online", 00:17:09.688 "raid_level": "raid1", 00:17:09.688 "superblock": true, 00:17:09.688 "num_base_bdevs": 4, 00:17:09.688 "num_base_bdevs_discovered": 3, 00:17:09.688 "num_base_bdevs_operational": 3, 00:17:09.688 "base_bdevs_list": [ 00:17:09.688 { 00:17:09.688 "name": "spare", 00:17:09.688 "uuid": "5d98812f-0639-53b7-a0df-13b475982b87", 00:17:09.688 "is_configured": true, 00:17:09.688 "data_offset": 2048, 00:17:09.688 "data_size": 63488 00:17:09.688 }, 00:17:09.688 { 00:17:09.688 "name": null, 00:17:09.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.688 "is_configured": false, 00:17:09.688 "data_offset": 0, 00:17:09.688 "data_size": 63488 00:17:09.688 }, 00:17:09.688 { 00:17:09.688 "name": "BaseBdev3", 00:17:09.688 "uuid": 
"6d319d39-6dba-53ff-8e1c-d23164a71b72", 00:17:09.688 "is_configured": true, 00:17:09.688 "data_offset": 2048, 00:17:09.688 "data_size": 63488 00:17:09.688 }, 00:17:09.688 { 00:17:09.688 "name": "BaseBdev4", 00:17:09.688 "uuid": "04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:09.688 "is_configured": true, 00:17:09.688 "data_offset": 2048, 00:17:09.688 "data_size": 63488 00:17:09.688 } 00:17:09.688 ] 00:17:09.688 }' 00:17:09.688 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.946 "name": "raid_bdev1", 00:17:09.946 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:09.946 "strip_size_kb": 0, 00:17:09.946 "state": "online", 00:17:09.946 "raid_level": "raid1", 00:17:09.946 "superblock": true, 00:17:09.946 "num_base_bdevs": 4, 00:17:09.946 "num_base_bdevs_discovered": 3, 00:17:09.946 "num_base_bdevs_operational": 3, 00:17:09.946 "base_bdevs_list": [ 00:17:09.946 { 00:17:09.946 "name": "spare", 00:17:09.946 "uuid": "5d98812f-0639-53b7-a0df-13b475982b87", 00:17:09.946 "is_configured": true, 00:17:09.946 "data_offset": 2048, 00:17:09.946 "data_size": 63488 00:17:09.946 }, 00:17:09.946 { 00:17:09.946 "name": null, 00:17:09.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.946 "is_configured": false, 00:17:09.946 "data_offset": 0, 00:17:09.946 "data_size": 63488 00:17:09.946 }, 00:17:09.946 { 00:17:09.946 "name": "BaseBdev3", 00:17:09.946 "uuid": "6d319d39-6dba-53ff-8e1c-d23164a71b72", 00:17:09.946 "is_configured": true, 00:17:09.946 "data_offset": 2048, 00:17:09.946 "data_size": 63488 00:17:09.946 }, 00:17:09.946 { 00:17:09.946 "name": "BaseBdev4", 00:17:09.946 "uuid": "04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:09.946 "is_configured": true, 00:17:09.946 "data_offset": 2048, 00:17:09.946 "data_size": 63488 00:17:09.946 } 00:17:09.946 ] 00:17:09.946 }' 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.946 76.88 IOPS, 230.62 MiB/s [2024-11-20T08:50:40.862Z] 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:09.946 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.205 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.205 "name": "raid_bdev1", 00:17:10.205 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:10.205 "strip_size_kb": 0, 00:17:10.205 
"state": "online", 00:17:10.205 "raid_level": "raid1", 00:17:10.205 "superblock": true, 00:17:10.205 "num_base_bdevs": 4, 00:17:10.205 "num_base_bdevs_discovered": 3, 00:17:10.205 "num_base_bdevs_operational": 3, 00:17:10.205 "base_bdevs_list": [ 00:17:10.205 { 00:17:10.205 "name": "spare", 00:17:10.205 "uuid": "5d98812f-0639-53b7-a0df-13b475982b87", 00:17:10.205 "is_configured": true, 00:17:10.205 "data_offset": 2048, 00:17:10.205 "data_size": 63488 00:17:10.205 }, 00:17:10.205 { 00:17:10.205 "name": null, 00:17:10.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.205 "is_configured": false, 00:17:10.205 "data_offset": 0, 00:17:10.205 "data_size": 63488 00:17:10.205 }, 00:17:10.205 { 00:17:10.205 "name": "BaseBdev3", 00:17:10.205 "uuid": "6d319d39-6dba-53ff-8e1c-d23164a71b72", 00:17:10.205 "is_configured": true, 00:17:10.205 "data_offset": 2048, 00:17:10.205 "data_size": 63488 00:17:10.205 }, 00:17:10.205 { 00:17:10.205 "name": "BaseBdev4", 00:17:10.205 "uuid": "04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:10.205 "is_configured": true, 00:17:10.205 "data_offset": 2048, 00:17:10.205 "data_size": 63488 00:17:10.205 } 00:17:10.205 ] 00:17:10.205 }' 00:17:10.205 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.205 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:10.463 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:10.463 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.463 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:10.463 [2024-11-20 08:50:41.328297] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:10.463 [2024-11-20 08:50:41.328465] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:10.721 00:17:10.721 Latency(us) 
00:17:10.721 [2024-11-20T08:50:41.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.721 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:10.721 raid_bdev1 : 8.64 73.81 221.44 0.00 0.00 19135.28 299.75 114390.11 00:17:10.721 [2024-11-20T08:50:41.637Z] =================================================================================================================== 00:17:10.721 [2024-11-20T08:50:41.637Z] Total : 73.81 221.44 0.00 0.00 19135.28 299.75 114390.11 00:17:10.721 { 00:17:10.721 "results": [ 00:17:10.721 { 00:17:10.721 "job": "raid_bdev1", 00:17:10.721 "core_mask": "0x1", 00:17:10.721 "workload": "randrw", 00:17:10.721 "percentage": 50, 00:17:10.721 "status": "finished", 00:17:10.721 "queue_depth": 2, 00:17:10.721 "io_size": 3145728, 00:17:10.721 "runtime": 8.643532, 00:17:10.721 "iops": 73.81241834935071, 00:17:10.721 "mibps": 221.43725504805212, 00:17:10.721 "io_failed": 0, 00:17:10.721 "io_timeout": 0, 00:17:10.721 "avg_latency_us": 19135.277013394127, 00:17:10.721 "min_latency_us": 299.75272727272727, 00:17:10.721 "max_latency_us": 114390.10909090909 00:17:10.721 } 00:17:10.721 ], 00:17:10.721 "core_count": 1 00:17:10.721 } 00:17:10.721 [2024-11-20 08:50:41.431803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.721 [2024-11-20 08:50:41.431862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:10.721 [2024-11-20 08:50:41.431993] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:10.721 [2024-11-20 08:50:41.432011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:10.721 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.721 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:10.721 08:50:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.721 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.721 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:10.721 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.721 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:10.721 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:10.721 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:10.721 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:10.721 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:10.721 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:10.721 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:10.721 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:10.721 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:10.721 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:10.721 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:10.721 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:10.721 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:10.980 /dev/nbd0 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 
00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:10.980 1+0 records in 00:17:10.980 1+0 records out 00:17:10.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319784 s, 12.8 MB/s 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # 
(( i++ )) 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:10.980 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:17:11.240 /dev/nbd1 00:17:11.240 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:11.240 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:11.240 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:11.240 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:17:11.240 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:11.240 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:11.240 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:11.240 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:17:11.240 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:11.240 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:11.240 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:11.240 1+0 records in 00:17:11.240 1+0 records out 00:17:11.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033663 s, 12.2 MB/s 00:17:11.240 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.240 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:17:11.240 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.498 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:11.498 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:17:11.498 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:11.498 08:50:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:11.498 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:11.498 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:11.498 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:11.498 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:11.498 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:11.498 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:11.498 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:11.498 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:11.757 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:11.757 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:11.757 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:11.757 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:11.757 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:11.757 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:11.757 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:11.757 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:11.757 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:11.757 
08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:17:11.757 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:17:11.757 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:11.757 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:17:11.757 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:11.757 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:11.757 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:11.757 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:11.757 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:11.757 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:11.757 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:17:12.015 /dev/nbd1 00:17:12.274 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:12.274 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:12.274 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:12.274 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:17:12.274 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:12.274 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:12.274 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:12.274 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:17:12.274 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:12.274 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:12.274 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:12.274 1+0 records in 00:17:12.274 1+0 records out 00:17:12.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296283 s, 13.8 MB/s 00:17:12.274 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.274 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:17:12.274 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.274 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:12.274 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:17:12.274 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:12.274 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:12.274 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:12.274 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:12.274 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:12.274 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 
00:17:12.274 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:12.274 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:12.274 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:12.274 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:12.532 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:12.532 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:12.532 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:12.532 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:12.532 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:12.532 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:12.532 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:12.532 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:12.532 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:12.532 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:12.532 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:12.532 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:12.532 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:12.532 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:12.532 08:50:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:12.792 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:12.792 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:12.792 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:12.792 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:12.792 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:12.792 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:12.792 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:12.792 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:12.792 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:12.792 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:12.792 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.792 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:12.792 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.792 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:12.792 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.792 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.051 [2024-11-20 08:50:43.708055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:13.051 
[2024-11-20 08:50:43.708128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.051 [2024-11-20 08:50:43.708178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:13.051 [2024-11-20 08:50:43.708206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.051 [2024-11-20 08:50:43.711064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.051 [2024-11-20 08:50:43.711105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:13.051 [2024-11-20 08:50:43.711242] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:13.051 [2024-11-20 08:50:43.711319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:13.051 [2024-11-20 08:50:43.711494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:13.051 [2024-11-20 08:50:43.711645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:13.051 spare 00:17:13.051 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.051 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:13.051 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.051 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.051 [2024-11-20 08:50:43.811790] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:13.051 [2024-11-20 08:50:43.811860] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:13.051 [2024-11-20 08:50:43.812340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:17:13.051 [2024-11-20 08:50:43.812617] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:13.051 [2024-11-20 08:50:43.812653] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:13.051 [2024-11-20 08:50:43.812910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.051 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.051 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:13.051 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.051 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.051 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.051 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.051 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:13.051 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.051 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.051 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.051 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.051 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.051 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.051 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.051 08:50:43 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.051 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.051 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.051 "name": "raid_bdev1", 00:17:13.051 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:13.051 "strip_size_kb": 0, 00:17:13.051 "state": "online", 00:17:13.051 "raid_level": "raid1", 00:17:13.051 "superblock": true, 00:17:13.051 "num_base_bdevs": 4, 00:17:13.051 "num_base_bdevs_discovered": 3, 00:17:13.051 "num_base_bdevs_operational": 3, 00:17:13.051 "base_bdevs_list": [ 00:17:13.051 { 00:17:13.051 "name": "spare", 00:17:13.051 "uuid": "5d98812f-0639-53b7-a0df-13b475982b87", 00:17:13.051 "is_configured": true, 00:17:13.051 "data_offset": 2048, 00:17:13.051 "data_size": 63488 00:17:13.051 }, 00:17:13.051 { 00:17:13.051 "name": null, 00:17:13.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.051 "is_configured": false, 00:17:13.051 "data_offset": 2048, 00:17:13.051 "data_size": 63488 00:17:13.051 }, 00:17:13.051 { 00:17:13.051 "name": "BaseBdev3", 00:17:13.051 "uuid": "6d319d39-6dba-53ff-8e1c-d23164a71b72", 00:17:13.051 "is_configured": true, 00:17:13.051 "data_offset": 2048, 00:17:13.051 "data_size": 63488 00:17:13.051 }, 00:17:13.051 { 00:17:13.051 "name": "BaseBdev4", 00:17:13.051 "uuid": "04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:13.051 "is_configured": true, 00:17:13.051 "data_offset": 2048, 00:17:13.051 "data_size": 63488 00:17:13.051 } 00:17:13.051 ] 00:17:13.051 }' 00:17:13.051 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.051 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.622 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:13.622 08:50:44 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.622 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:13.622 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:13.622 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.622 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.622 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.622 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.622 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.622 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.622 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.622 "name": "raid_bdev1", 00:17:13.622 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:13.622 "strip_size_kb": 0, 00:17:13.622 "state": "online", 00:17:13.622 "raid_level": "raid1", 00:17:13.622 "superblock": true, 00:17:13.622 "num_base_bdevs": 4, 00:17:13.622 "num_base_bdevs_discovered": 3, 00:17:13.622 "num_base_bdevs_operational": 3, 00:17:13.622 "base_bdevs_list": [ 00:17:13.622 { 00:17:13.622 "name": "spare", 00:17:13.622 "uuid": "5d98812f-0639-53b7-a0df-13b475982b87", 00:17:13.622 "is_configured": true, 00:17:13.622 "data_offset": 2048, 00:17:13.622 "data_size": 63488 00:17:13.622 }, 00:17:13.622 { 00:17:13.622 "name": null, 00:17:13.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.622 "is_configured": false, 00:17:13.622 "data_offset": 2048, 00:17:13.622 "data_size": 63488 00:17:13.622 }, 00:17:13.622 { 00:17:13.622 "name": "BaseBdev3", 00:17:13.622 "uuid": "6d319d39-6dba-53ff-8e1c-d23164a71b72", 
00:17:13.622 "is_configured": true, 00:17:13.622 "data_offset": 2048, 00:17:13.622 "data_size": 63488 00:17:13.622 }, 00:17:13.622 { 00:17:13.622 "name": "BaseBdev4", 00:17:13.622 "uuid": "04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:13.622 "is_configured": true, 00:17:13.622 "data_offset": 2048, 00:17:13.622 "data_size": 63488 00:17:13.622 } 00:17:13.622 ] 00:17:13.622 }' 00:17:13.622 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.622 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:13.622 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.622 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:13.622 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.622 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.622 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.622 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:13.622 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.879 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.879 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:13.879 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.879 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.879 [2024-11-20 08:50:44.553269] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:13.879 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.879 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:13.879 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.879 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.879 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.879 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.879 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:13.879 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.879 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.879 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.879 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.879 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.879 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.879 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.879 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.879 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.879 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.879 "name": "raid_bdev1", 00:17:13.879 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:13.879 "strip_size_kb": 0, 00:17:13.879 "state": 
"online", 00:17:13.879 "raid_level": "raid1", 00:17:13.879 "superblock": true, 00:17:13.879 "num_base_bdevs": 4, 00:17:13.879 "num_base_bdevs_discovered": 2, 00:17:13.879 "num_base_bdevs_operational": 2, 00:17:13.879 "base_bdevs_list": [ 00:17:13.879 { 00:17:13.879 "name": null, 00:17:13.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.879 "is_configured": false, 00:17:13.879 "data_offset": 0, 00:17:13.879 "data_size": 63488 00:17:13.879 }, 00:17:13.880 { 00:17:13.880 "name": null, 00:17:13.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.880 "is_configured": false, 00:17:13.880 "data_offset": 2048, 00:17:13.880 "data_size": 63488 00:17:13.880 }, 00:17:13.880 { 00:17:13.880 "name": "BaseBdev3", 00:17:13.880 "uuid": "6d319d39-6dba-53ff-8e1c-d23164a71b72", 00:17:13.880 "is_configured": true, 00:17:13.880 "data_offset": 2048, 00:17:13.880 "data_size": 63488 00:17:13.880 }, 00:17:13.880 { 00:17:13.880 "name": "BaseBdev4", 00:17:13.880 "uuid": "04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:13.880 "is_configured": true, 00:17:13.880 "data_offset": 2048, 00:17:13.880 "data_size": 63488 00:17:13.880 } 00:17:13.880 ] 00:17:13.880 }' 00:17:13.880 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.880 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.446 08:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:14.446 08:50:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.446 08:50:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.446 [2024-11-20 08:50:45.077469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:14.446 [2024-11-20 08:50:45.077700] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev 
raid_bdev1 (6) 00:17:14.446 [2024-11-20 08:50:45.077722] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:14.446 [2024-11-20 08:50:45.077768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:14.446 [2024-11-20 08:50:45.091675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:17:14.446 08:50:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.446 08:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:14.446 [2024-11-20 08:50:45.094208] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:15.381 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.381 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.381 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.381 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.381 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.381 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.381 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.381 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.381 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.381 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.381 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.381 
"name": "raid_bdev1", 00:17:15.381 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:15.381 "strip_size_kb": 0, 00:17:15.381 "state": "online", 00:17:15.381 "raid_level": "raid1", 00:17:15.381 "superblock": true, 00:17:15.381 "num_base_bdevs": 4, 00:17:15.381 "num_base_bdevs_discovered": 3, 00:17:15.381 "num_base_bdevs_operational": 3, 00:17:15.381 "process": { 00:17:15.381 "type": "rebuild", 00:17:15.381 "target": "spare", 00:17:15.381 "progress": { 00:17:15.381 "blocks": 20480, 00:17:15.381 "percent": 32 00:17:15.381 } 00:17:15.381 }, 00:17:15.381 "base_bdevs_list": [ 00:17:15.381 { 00:17:15.381 "name": "spare", 00:17:15.381 "uuid": "5d98812f-0639-53b7-a0df-13b475982b87", 00:17:15.381 "is_configured": true, 00:17:15.381 "data_offset": 2048, 00:17:15.381 "data_size": 63488 00:17:15.381 }, 00:17:15.381 { 00:17:15.381 "name": null, 00:17:15.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.381 "is_configured": false, 00:17:15.381 "data_offset": 2048, 00:17:15.381 "data_size": 63488 00:17:15.381 }, 00:17:15.381 { 00:17:15.381 "name": "BaseBdev3", 00:17:15.381 "uuid": "6d319d39-6dba-53ff-8e1c-d23164a71b72", 00:17:15.381 "is_configured": true, 00:17:15.381 "data_offset": 2048, 00:17:15.381 "data_size": 63488 00:17:15.381 }, 00:17:15.381 { 00:17:15.381 "name": "BaseBdev4", 00:17:15.381 "uuid": "04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:15.381 "is_configured": true, 00:17:15.381 "data_offset": 2048, 00:17:15.381 "data_size": 63488 00:17:15.381 } 00:17:15.381 ] 00:17:15.381 }' 00:17:15.381 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.381 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.381 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.381 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.381 
08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:15.381 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.381 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.381 [2024-11-20 08:50:46.263789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:15.640 [2024-11-20 08:50:46.302838] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:15.640 [2024-11-20 08:50:46.302906] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.640 [2024-11-20 08:50:46.302933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:15.640 [2024-11-20 08:50:46.302944] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:15.640 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.640 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:15.640 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.640 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.640 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.640 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.640 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:15.640 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.640 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.640 08:50:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.640 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.640 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.640 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.640 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.640 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.640 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.640 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.640 "name": "raid_bdev1", 00:17:15.640 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:15.640 "strip_size_kb": 0, 00:17:15.640 "state": "online", 00:17:15.640 "raid_level": "raid1", 00:17:15.640 "superblock": true, 00:17:15.640 "num_base_bdevs": 4, 00:17:15.640 "num_base_bdevs_discovered": 2, 00:17:15.640 "num_base_bdevs_operational": 2, 00:17:15.640 "base_bdevs_list": [ 00:17:15.640 { 00:17:15.640 "name": null, 00:17:15.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.640 "is_configured": false, 00:17:15.640 "data_offset": 0, 00:17:15.640 "data_size": 63488 00:17:15.640 }, 00:17:15.640 { 00:17:15.640 "name": null, 00:17:15.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.640 "is_configured": false, 00:17:15.640 "data_offset": 2048, 00:17:15.640 "data_size": 63488 00:17:15.640 }, 00:17:15.640 { 00:17:15.640 "name": "BaseBdev3", 00:17:15.640 "uuid": "6d319d39-6dba-53ff-8e1c-d23164a71b72", 00:17:15.640 "is_configured": true, 00:17:15.640 "data_offset": 2048, 00:17:15.640 "data_size": 63488 00:17:15.640 }, 00:17:15.640 { 00:17:15.640 "name": "BaseBdev4", 00:17:15.640 "uuid": 
"04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:15.640 "is_configured": true, 00:17:15.640 "data_offset": 2048, 00:17:15.640 "data_size": 63488 00:17:15.640 } 00:17:15.640 ] 00:17:15.640 }' 00:17:15.640 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.640 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:16.207 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:16.207 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.207 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:16.207 [2024-11-20 08:50:46.821939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:16.207 [2024-11-20 08:50:46.822024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.207 [2024-11-20 08:50:46.822059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:16.207 [2024-11-20 08:50:46.822075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.207 [2024-11-20 08:50:46.822677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.207 [2024-11-20 08:50:46.822710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:16.207 [2024-11-20 08:50:46.822842] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:16.207 [2024-11-20 08:50:46.822861] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:16.207 [2024-11-20 08:50:46.822880] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:16.207 [2024-11-20 08:50:46.822908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:16.207 [2024-11-20 08:50:46.837059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:17:16.207 spare 00:17:16.207 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.207 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:16.207 [2024-11-20 08:50:46.839486] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:17.143 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.143 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.143 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.143 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.143 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.143 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.143 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.143 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.143 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.143 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.143 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.143 "name": "raid_bdev1", 00:17:17.143 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:17.143 "strip_size_kb": 0, 00:17:17.143 
"state": "online", 00:17:17.143 "raid_level": "raid1", 00:17:17.143 "superblock": true, 00:17:17.143 "num_base_bdevs": 4, 00:17:17.143 "num_base_bdevs_discovered": 3, 00:17:17.143 "num_base_bdevs_operational": 3, 00:17:17.143 "process": { 00:17:17.143 "type": "rebuild", 00:17:17.143 "target": "spare", 00:17:17.143 "progress": { 00:17:17.143 "blocks": 20480, 00:17:17.143 "percent": 32 00:17:17.143 } 00:17:17.143 }, 00:17:17.143 "base_bdevs_list": [ 00:17:17.143 { 00:17:17.143 "name": "spare", 00:17:17.143 "uuid": "5d98812f-0639-53b7-a0df-13b475982b87", 00:17:17.143 "is_configured": true, 00:17:17.143 "data_offset": 2048, 00:17:17.143 "data_size": 63488 00:17:17.143 }, 00:17:17.143 { 00:17:17.143 "name": null, 00:17:17.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.143 "is_configured": false, 00:17:17.143 "data_offset": 2048, 00:17:17.143 "data_size": 63488 00:17:17.143 }, 00:17:17.143 { 00:17:17.143 "name": "BaseBdev3", 00:17:17.143 "uuid": "6d319d39-6dba-53ff-8e1c-d23164a71b72", 00:17:17.143 "is_configured": true, 00:17:17.143 "data_offset": 2048, 00:17:17.143 "data_size": 63488 00:17:17.143 }, 00:17:17.143 { 00:17:17.143 "name": "BaseBdev4", 00:17:17.143 "uuid": "04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:17.143 "is_configured": true, 00:17:17.143 "data_offset": 2048, 00:17:17.143 "data_size": 63488 00:17:17.143 } 00:17:17.143 ] 00:17:17.143 }' 00:17:17.143 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.143 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.143 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.143 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.143 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:17.143 08:50:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.144 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.144 [2024-11-20 08:50:48.013098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.144 [2024-11-20 08:50:48.048183] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:17.144 [2024-11-20 08:50:48.048312] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.144 [2024-11-20 08:50:48.048339] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.144 [2024-11-20 08:50:48.048354] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:17.402 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.402 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:17.402 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.402 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.402 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.402 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.402 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:17.402 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.402 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.402 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.402 08:50:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.402 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.402 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.402 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.402 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.402 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.402 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.402 "name": "raid_bdev1", 00:17:17.402 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:17.402 "strip_size_kb": 0, 00:17:17.402 "state": "online", 00:17:17.402 "raid_level": "raid1", 00:17:17.402 "superblock": true, 00:17:17.402 "num_base_bdevs": 4, 00:17:17.402 "num_base_bdevs_discovered": 2, 00:17:17.402 "num_base_bdevs_operational": 2, 00:17:17.402 "base_bdevs_list": [ 00:17:17.402 { 00:17:17.402 "name": null, 00:17:17.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.402 "is_configured": false, 00:17:17.402 "data_offset": 0, 00:17:17.402 "data_size": 63488 00:17:17.403 }, 00:17:17.403 { 00:17:17.403 "name": null, 00:17:17.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.403 "is_configured": false, 00:17:17.403 "data_offset": 2048, 00:17:17.403 "data_size": 63488 00:17:17.403 }, 00:17:17.403 { 00:17:17.403 "name": "BaseBdev3", 00:17:17.403 "uuid": "6d319d39-6dba-53ff-8e1c-d23164a71b72", 00:17:17.403 "is_configured": true, 00:17:17.403 "data_offset": 2048, 00:17:17.403 "data_size": 63488 00:17:17.403 }, 00:17:17.403 { 00:17:17.403 "name": "BaseBdev4", 00:17:17.403 "uuid": "04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:17.403 "is_configured": true, 00:17:17.403 "data_offset": 2048, 00:17:17.403 
"data_size": 63488 00:17:17.403 } 00:17:17.403 ] 00:17:17.403 }' 00:17:17.403 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.403 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.970 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:17.970 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.970 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:17.970 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:17.970 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.970 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.970 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.970 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.970 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.970 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.970 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.970 "name": "raid_bdev1", 00:17:17.970 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:17.970 "strip_size_kb": 0, 00:17:17.970 "state": "online", 00:17:17.970 "raid_level": "raid1", 00:17:17.970 "superblock": true, 00:17:17.970 "num_base_bdevs": 4, 00:17:17.970 "num_base_bdevs_discovered": 2, 00:17:17.970 "num_base_bdevs_operational": 2, 00:17:17.970 "base_bdevs_list": [ 00:17:17.970 { 00:17:17.970 "name": null, 00:17:17.970 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:17.970 "is_configured": false, 00:17:17.970 "data_offset": 0, 00:17:17.970 "data_size": 63488 00:17:17.970 }, 00:17:17.970 { 00:17:17.970 "name": null, 00:17:17.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.970 "is_configured": false, 00:17:17.970 "data_offset": 2048, 00:17:17.970 "data_size": 63488 00:17:17.970 }, 00:17:17.970 { 00:17:17.970 "name": "BaseBdev3", 00:17:17.970 "uuid": "6d319d39-6dba-53ff-8e1c-d23164a71b72", 00:17:17.970 "is_configured": true, 00:17:17.970 "data_offset": 2048, 00:17:17.970 "data_size": 63488 00:17:17.970 }, 00:17:17.970 { 00:17:17.970 "name": "BaseBdev4", 00:17:17.970 "uuid": "04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:17.970 "is_configured": true, 00:17:17.970 "data_offset": 2048, 00:17:17.970 "data_size": 63488 00:17:17.970 } 00:17:17.970 ] 00:17:17.970 }' 00:17:17.970 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.970 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:17.970 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.970 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:17.970 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:17.970 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.970 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.970 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.970 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:17.970 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.970 08:50:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.970 [2024-11-20 08:50:48.791652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:17.970 [2024-11-20 08:50:48.791712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.970 [2024-11-20 08:50:48.791739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:17:17.970 [2024-11-20 08:50:48.791759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.970 [2024-11-20 08:50:48.792355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.970 [2024-11-20 08:50:48.792395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:17.970 [2024-11-20 08:50:48.792495] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:17.970 [2024-11-20 08:50:48.792521] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:17.970 [2024-11-20 08:50:48.792536] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:17.970 [2024-11-20 08:50:48.792552] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:17.970 BaseBdev1 00:17:17.970 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.970 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:18.905 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:18.905 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.905 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:18.905 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.905 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.905 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:18.905 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.905 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.905 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.905 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.905 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.905 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.905 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.905 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:18.905 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.164 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.164 "name": "raid_bdev1", 00:17:19.164 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:19.164 "strip_size_kb": 0, 00:17:19.164 "state": "online", 00:17:19.164 "raid_level": "raid1", 00:17:19.164 "superblock": true, 00:17:19.164 "num_base_bdevs": 4, 00:17:19.164 "num_base_bdevs_discovered": 2, 00:17:19.164 "num_base_bdevs_operational": 2, 00:17:19.164 "base_bdevs_list": [ 00:17:19.164 { 00:17:19.164 "name": null, 00:17:19.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.164 "is_configured": false, 00:17:19.164 
"data_offset": 0, 00:17:19.164 "data_size": 63488 00:17:19.164 }, 00:17:19.164 { 00:17:19.164 "name": null, 00:17:19.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.164 "is_configured": false, 00:17:19.164 "data_offset": 2048, 00:17:19.164 "data_size": 63488 00:17:19.164 }, 00:17:19.164 { 00:17:19.164 "name": "BaseBdev3", 00:17:19.164 "uuid": "6d319d39-6dba-53ff-8e1c-d23164a71b72", 00:17:19.164 "is_configured": true, 00:17:19.164 "data_offset": 2048, 00:17:19.164 "data_size": 63488 00:17:19.164 }, 00:17:19.164 { 00:17:19.164 "name": "BaseBdev4", 00:17:19.164 "uuid": "04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:19.164 "is_configured": true, 00:17:19.164 "data_offset": 2048, 00:17:19.164 "data_size": 63488 00:17:19.164 } 00:17:19.164 ] 00:17:19.164 }' 00:17:19.164 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.164 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.422 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:19.422 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.422 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:19.422 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:19.422 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.422 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.422 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.422 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.422 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:17:19.422 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.682 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.682 "name": "raid_bdev1", 00:17:19.682 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:19.682 "strip_size_kb": 0, 00:17:19.682 "state": "online", 00:17:19.682 "raid_level": "raid1", 00:17:19.682 "superblock": true, 00:17:19.682 "num_base_bdevs": 4, 00:17:19.682 "num_base_bdevs_discovered": 2, 00:17:19.682 "num_base_bdevs_operational": 2, 00:17:19.682 "base_bdevs_list": [ 00:17:19.682 { 00:17:19.682 "name": null, 00:17:19.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.682 "is_configured": false, 00:17:19.682 "data_offset": 0, 00:17:19.682 "data_size": 63488 00:17:19.682 }, 00:17:19.682 { 00:17:19.682 "name": null, 00:17:19.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.682 "is_configured": false, 00:17:19.682 "data_offset": 2048, 00:17:19.682 "data_size": 63488 00:17:19.682 }, 00:17:19.682 { 00:17:19.682 "name": "BaseBdev3", 00:17:19.682 "uuid": "6d319d39-6dba-53ff-8e1c-d23164a71b72", 00:17:19.682 "is_configured": true, 00:17:19.682 "data_offset": 2048, 00:17:19.682 "data_size": 63488 00:17:19.682 }, 00:17:19.682 { 00:17:19.682 "name": "BaseBdev4", 00:17:19.682 "uuid": "04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:19.682 "is_configured": true, 00:17:19.682 "data_offset": 2048, 00:17:19.682 "data_size": 63488 00:17:19.682 } 00:17:19.682 ] 00:17:19.682 }' 00:17:19.682 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.682 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:19.682 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.682 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:19.682 
08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:19.682 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:17:19.682 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:19.682 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:19.682 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.682 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:19.682 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.682 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:19.682 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.682 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.682 [2024-11-20 08:50:50.484477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:19.682 [2024-11-20 08:50:50.484737] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:19.682 [2024-11-20 08:50:50.484765] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:19.682 request: 00:17:19.682 { 00:17:19.682 "base_bdev": "BaseBdev1", 00:17:19.682 "raid_bdev": "raid_bdev1", 00:17:19.682 "method": "bdev_raid_add_base_bdev", 00:17:19.682 "req_id": 1 00:17:19.682 } 00:17:19.682 Got JSON-RPC error response 00:17:19.682 response: 00:17:19.682 { 00:17:19.682 "code": -22, 00:17:19.682 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:19.682 } 00:17:19.682 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:19.682 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:17:19.682 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:19.682 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:19.682 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:19.682 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:20.619 08:50:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:20.619 08:50:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.619 08:50:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.619 08:50:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.619 08:50:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.619 08:50:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:20.619 08:50:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.619 08:50:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.619 08:50:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.619 08:50:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.619 08:50:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.619 08:50:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.619 08:50:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.619 08:50:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.619 08:50:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.879 08:50:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.879 "name": "raid_bdev1", 00:17:20.879 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:20.879 "strip_size_kb": 0, 00:17:20.879 "state": "online", 00:17:20.879 "raid_level": "raid1", 00:17:20.879 "superblock": true, 00:17:20.879 "num_base_bdevs": 4, 00:17:20.879 "num_base_bdevs_discovered": 2, 00:17:20.879 "num_base_bdevs_operational": 2, 00:17:20.879 "base_bdevs_list": [ 00:17:20.879 { 00:17:20.879 "name": null, 00:17:20.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.879 "is_configured": false, 00:17:20.879 "data_offset": 0, 00:17:20.879 "data_size": 63488 00:17:20.879 }, 00:17:20.879 { 00:17:20.879 "name": null, 00:17:20.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.879 "is_configured": false, 00:17:20.879 "data_offset": 2048, 00:17:20.879 "data_size": 63488 00:17:20.879 }, 00:17:20.879 { 00:17:20.879 "name": "BaseBdev3", 00:17:20.879 "uuid": "6d319d39-6dba-53ff-8e1c-d23164a71b72", 00:17:20.879 "is_configured": true, 00:17:20.879 "data_offset": 2048, 00:17:20.879 "data_size": 63488 00:17:20.879 }, 00:17:20.879 { 00:17:20.879 "name": "BaseBdev4", 00:17:20.879 "uuid": "04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:20.879 "is_configured": true, 00:17:20.879 "data_offset": 2048, 00:17:20.879 "data_size": 63488 00:17:20.879 } 00:17:20.879 ] 00:17:20.879 }' 00:17:20.879 08:50:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.879 08:50:51 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:17:21.138 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:21.138 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.138 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:21.138 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:21.138 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.138 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.138 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.138 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.138 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:21.138 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.397 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.397 "name": "raid_bdev1", 00:17:21.397 "uuid": "b381c114-db68-452b-89e9-5efcc387e458", 00:17:21.397 "strip_size_kb": 0, 00:17:21.397 "state": "online", 00:17:21.397 "raid_level": "raid1", 00:17:21.397 "superblock": true, 00:17:21.397 "num_base_bdevs": 4, 00:17:21.397 "num_base_bdevs_discovered": 2, 00:17:21.397 "num_base_bdevs_operational": 2, 00:17:21.397 "base_bdevs_list": [ 00:17:21.397 { 00:17:21.397 "name": null, 00:17:21.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.397 "is_configured": false, 00:17:21.397 "data_offset": 0, 00:17:21.397 "data_size": 63488 00:17:21.397 }, 00:17:21.397 { 00:17:21.397 "name": null, 00:17:21.397 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:21.398 "is_configured": false, 00:17:21.398 "data_offset": 2048, 00:17:21.398 "data_size": 63488 00:17:21.398 }, 00:17:21.398 { 00:17:21.398 "name": "BaseBdev3", 00:17:21.398 "uuid": "6d319d39-6dba-53ff-8e1c-d23164a71b72", 00:17:21.398 "is_configured": true, 00:17:21.398 "data_offset": 2048, 00:17:21.398 "data_size": 63488 00:17:21.398 }, 00:17:21.398 { 00:17:21.398 "name": "BaseBdev4", 00:17:21.398 "uuid": "04159e6b-0c5a-5bf6-b944-822a8b8f3b97", 00:17:21.398 "is_configured": true, 00:17:21.398 "data_offset": 2048, 00:17:21.398 "data_size": 63488 00:17:21.398 } 00:17:21.398 ] 00:17:21.398 }' 00:17:21.398 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.398 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:21.398 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.398 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:21.398 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79503 00:17:21.398 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79503 ']' 00:17:21.398 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79503 00:17:21.398 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:17:21.398 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.398 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79503 00:17:21.398 killing process with pid 79503 00:17:21.398 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:21.398 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:17:21.398 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79503' 00:17:21.398 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79503 00:17:21.398 Received shutdown signal, test time was about 19.430639 seconds 00:17:21.398 00:17:21.398 Latency(us) 00:17:21.398 [2024-11-20T08:50:52.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.398 [2024-11-20T08:50:52.314Z] =================================================================================================================== 00:17:21.398 [2024-11-20T08:50:52.314Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:21.398 [2024-11-20 08:50:52.199337] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:21.398 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79503 00:17:21.398 [2024-11-20 08:50:52.199481] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.398 [2024-11-20 08:50:52.199572] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:21.398 [2024-11-20 08:50:52.199589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:21.656 [2024-11-20 08:50:52.568536] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:23.033 08:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:23.033 00:17:23.033 real 0m22.945s 00:17:23.033 user 0m31.215s 00:17:23.033 sys 0m2.307s 00:17:23.033 08:50:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.033 ************************************ 00:17:23.033 END TEST raid_rebuild_test_sb_io 00:17:23.033 08:50:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.033 ************************************ 
00:17:23.033 08:50:53 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:17:23.033 08:50:53 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:17:23.033 08:50:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:23.033 08:50:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.033 08:50:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:23.033 ************************************ 00:17:23.033 START TEST raid5f_state_function_test 00:17:23.033 ************************************ 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:23.033 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80242 00:17:23.034 Process raid pid: 80242 00:17:23.034 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80242' 00:17:23.034 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:23.034 08:50:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80242 00:17:23.034 08:50:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80242 ']' 00:17:23.034 08:50:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.034 08:50:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.034 08:50:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.034 08:50:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.034 08:50:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.034 [2024-11-20 08:50:53.815242] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:17:23.034 [2024-11-20 08:50:53.815434] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.293 [2024-11-20 08:50:54.006868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.293 [2024-11-20 08:50:54.162107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.551 [2024-11-20 08:50:54.379286] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.551 [2024-11-20 08:50:54.379343] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.126 08:50:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.126 08:50:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:17:24.126 08:50:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:24.126 08:50:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.126 08:50:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.126 [2024-11-20 08:50:54.807388] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:24.126 [2024-11-20 08:50:54.807455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:24.126 [2024-11-20 08:50:54.807472] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.126 [2024-11-20 08:50:54.807489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.126 [2024-11-20 08:50:54.807499] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:17:24.126 [2024-11-20 08:50:54.807513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:24.126 08:50:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.126 08:50:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:24.126 08:50:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.126 08:50:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.126 08:50:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.126 08:50:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.126 08:50:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:24.126 08:50:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.126 08:50:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.126 08:50:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.126 08:50:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.126 08:50:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.126 08:50:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.126 08:50:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.126 08:50:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.126 08:50:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:24.126 08:50:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.126 "name": "Existed_Raid", 00:17:24.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.126 "strip_size_kb": 64, 00:17:24.126 "state": "configuring", 00:17:24.126 "raid_level": "raid5f", 00:17:24.126 "superblock": false, 00:17:24.126 "num_base_bdevs": 3, 00:17:24.126 "num_base_bdevs_discovered": 0, 00:17:24.126 "num_base_bdevs_operational": 3, 00:17:24.126 "base_bdevs_list": [ 00:17:24.126 { 00:17:24.126 "name": "BaseBdev1", 00:17:24.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.126 "is_configured": false, 00:17:24.126 "data_offset": 0, 00:17:24.126 "data_size": 0 00:17:24.126 }, 00:17:24.126 { 00:17:24.126 "name": "BaseBdev2", 00:17:24.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.126 "is_configured": false, 00:17:24.126 "data_offset": 0, 00:17:24.126 "data_size": 0 00:17:24.126 }, 00:17:24.126 { 00:17:24.126 "name": "BaseBdev3", 00:17:24.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.126 "is_configured": false, 00:17:24.126 "data_offset": 0, 00:17:24.126 "data_size": 0 00:17:24.126 } 00:17:24.126 ] 00:17:24.126 }' 00:17:24.126 08:50:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.126 08:50:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.692 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.693 [2024-11-20 08:50:55.339848] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:24.693 [2024-11-20 08:50:55.339925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.693 [2024-11-20 08:50:55.347822] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:24.693 [2024-11-20 08:50:55.347894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:24.693 [2024-11-20 08:50:55.347910] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.693 [2024-11-20 08:50:55.347926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.693 [2024-11-20 08:50:55.347936] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:24.693 [2024-11-20 08:50:55.347950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.693 [2024-11-20 08:50:55.392941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.693 BaseBdev1 00:17:24.693 08:50:55 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.693 [ 00:17:24.693 { 00:17:24.693 "name": "BaseBdev1", 00:17:24.693 "aliases": [ 00:17:24.693 "ad33386d-73c3-4f11-8127-39104e4fef1b" 00:17:24.693 ], 00:17:24.693 "product_name": "Malloc disk", 00:17:24.693 "block_size": 512, 00:17:24.693 "num_blocks": 65536, 00:17:24.693 "uuid": "ad33386d-73c3-4f11-8127-39104e4fef1b", 00:17:24.693 "assigned_rate_limits": { 00:17:24.693 "rw_ios_per_sec": 0, 00:17:24.693 
"rw_mbytes_per_sec": 0, 00:17:24.693 "r_mbytes_per_sec": 0, 00:17:24.693 "w_mbytes_per_sec": 0 00:17:24.693 }, 00:17:24.693 "claimed": true, 00:17:24.693 "claim_type": "exclusive_write", 00:17:24.693 "zoned": false, 00:17:24.693 "supported_io_types": { 00:17:24.693 "read": true, 00:17:24.693 "write": true, 00:17:24.693 "unmap": true, 00:17:24.693 "flush": true, 00:17:24.693 "reset": true, 00:17:24.693 "nvme_admin": false, 00:17:24.693 "nvme_io": false, 00:17:24.693 "nvme_io_md": false, 00:17:24.693 "write_zeroes": true, 00:17:24.693 "zcopy": true, 00:17:24.693 "get_zone_info": false, 00:17:24.693 "zone_management": false, 00:17:24.693 "zone_append": false, 00:17:24.693 "compare": false, 00:17:24.693 "compare_and_write": false, 00:17:24.693 "abort": true, 00:17:24.693 "seek_hole": false, 00:17:24.693 "seek_data": false, 00:17:24.693 "copy": true, 00:17:24.693 "nvme_iov_md": false 00:17:24.693 }, 00:17:24.693 "memory_domains": [ 00:17:24.693 { 00:17:24.693 "dma_device_id": "system", 00:17:24.693 "dma_device_type": 1 00:17:24.693 }, 00:17:24.693 { 00:17:24.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.693 "dma_device_type": 2 00:17:24.693 } 00:17:24.693 ], 00:17:24.693 "driver_specific": {} 00:17:24.693 } 00:17:24.693 ] 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.693 08:50:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.693 "name": "Existed_Raid", 00:17:24.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.693 "strip_size_kb": 64, 00:17:24.693 "state": "configuring", 00:17:24.693 "raid_level": "raid5f", 00:17:24.693 "superblock": false, 00:17:24.693 "num_base_bdevs": 3, 00:17:24.693 "num_base_bdevs_discovered": 1, 00:17:24.693 "num_base_bdevs_operational": 3, 00:17:24.693 "base_bdevs_list": [ 00:17:24.693 { 00:17:24.693 "name": "BaseBdev1", 00:17:24.693 "uuid": "ad33386d-73c3-4f11-8127-39104e4fef1b", 00:17:24.693 "is_configured": true, 00:17:24.693 "data_offset": 0, 00:17:24.693 "data_size": 65536 00:17:24.693 }, 00:17:24.693 { 00:17:24.693 "name": 
"BaseBdev2", 00:17:24.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.693 "is_configured": false, 00:17:24.693 "data_offset": 0, 00:17:24.693 "data_size": 0 00:17:24.693 }, 00:17:24.693 { 00:17:24.693 "name": "BaseBdev3", 00:17:24.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.693 "is_configured": false, 00:17:24.693 "data_offset": 0, 00:17:24.693 "data_size": 0 00:17:24.693 } 00:17:24.693 ] 00:17:24.693 }' 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.693 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.260 [2024-11-20 08:50:55.937166] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:25.260 [2024-11-20 08:50:55.937349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.260 [2024-11-20 08:50:55.945217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:25.260 [2024-11-20 08:50:55.947561] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:17:25.260 [2024-11-20 08:50:55.947612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:25.260 [2024-11-20 08:50:55.947628] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:25.260 [2024-11-20 08:50:55.947643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.260 08:50:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.260 08:50:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.260 "name": "Existed_Raid", 00:17:25.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.260 "strip_size_kb": 64, 00:17:25.260 "state": "configuring", 00:17:25.260 "raid_level": "raid5f", 00:17:25.260 "superblock": false, 00:17:25.260 "num_base_bdevs": 3, 00:17:25.260 "num_base_bdevs_discovered": 1, 00:17:25.260 "num_base_bdevs_operational": 3, 00:17:25.260 "base_bdevs_list": [ 00:17:25.260 { 00:17:25.260 "name": "BaseBdev1", 00:17:25.260 "uuid": "ad33386d-73c3-4f11-8127-39104e4fef1b", 00:17:25.260 "is_configured": true, 00:17:25.260 "data_offset": 0, 00:17:25.260 "data_size": 65536 00:17:25.260 }, 00:17:25.260 { 00:17:25.260 "name": "BaseBdev2", 00:17:25.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.260 "is_configured": false, 00:17:25.260 "data_offset": 0, 00:17:25.260 "data_size": 0 00:17:25.260 }, 00:17:25.260 { 00:17:25.260 "name": "BaseBdev3", 00:17:25.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.260 "is_configured": false, 00:17:25.260 "data_offset": 0, 00:17:25.260 "data_size": 0 00:17:25.260 } 00:17:25.260 ] 00:17:25.260 }' 00:17:25.260 08:50:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.260 08:50:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.829 08:50:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:25.829 08:50:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.829 08:50:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.829 [2024-11-20 08:50:56.483422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:25.829 BaseBdev2 00:17:25.829 08:50:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.829 08:50:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:25.829 08:50:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:25.829 08:50:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:25.829 08:50:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:25.829 08:50:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:25.829 08:50:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:25.829 08:50:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:25.829 08:50:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.829 08:50:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.829 08:50:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.829 08:50:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:25.829 08:50:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.829 08:50:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:25.829 [ 00:17:25.829 { 00:17:25.829 "name": "BaseBdev2", 00:17:25.829 "aliases": [ 00:17:25.829 "076ad577-1638-4b87-a920-50a46a5e6a33" 00:17:25.829 ], 00:17:25.829 "product_name": "Malloc disk", 00:17:25.829 "block_size": 512, 00:17:25.829 "num_blocks": 65536, 00:17:25.829 "uuid": "076ad577-1638-4b87-a920-50a46a5e6a33", 00:17:25.829 "assigned_rate_limits": { 00:17:25.829 "rw_ios_per_sec": 0, 00:17:25.829 "rw_mbytes_per_sec": 0, 00:17:25.829 "r_mbytes_per_sec": 0, 00:17:25.829 "w_mbytes_per_sec": 0 00:17:25.829 }, 00:17:25.829 "claimed": true, 00:17:25.829 "claim_type": "exclusive_write", 00:17:25.829 "zoned": false, 00:17:25.829 "supported_io_types": { 00:17:25.829 "read": true, 00:17:25.829 "write": true, 00:17:25.829 "unmap": true, 00:17:25.829 "flush": true, 00:17:25.829 "reset": true, 00:17:25.829 "nvme_admin": false, 00:17:25.829 "nvme_io": false, 00:17:25.829 "nvme_io_md": false, 00:17:25.829 "write_zeroes": true, 00:17:25.829 "zcopy": true, 00:17:25.829 "get_zone_info": false, 00:17:25.829 "zone_management": false, 00:17:25.829 "zone_append": false, 00:17:25.829 "compare": false, 00:17:25.829 "compare_and_write": false, 00:17:25.829 "abort": true, 00:17:25.829 "seek_hole": false, 00:17:25.829 "seek_data": false, 00:17:25.829 "copy": true, 00:17:25.829 "nvme_iov_md": false 00:17:25.829 }, 00:17:25.829 "memory_domains": [ 00:17:25.829 { 00:17:25.829 "dma_device_id": "system", 00:17:25.829 "dma_device_type": 1 00:17:25.829 }, 00:17:25.829 { 00:17:25.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.829 "dma_device_type": 2 00:17:25.829 } 00:17:25.829 ], 00:17:25.829 "driver_specific": {} 00:17:25.829 } 00:17:25.829 ] 00:17:25.829 08:50:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.829 08:50:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:25.829 08:50:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:25.829 08:50:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:25.829 08:50:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:25.829 08:50:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.830 08:50:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:25.830 08:50:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.830 08:50:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.830 08:50:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:25.830 08:50:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.830 08:50:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.830 08:50:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.830 08:50:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.830 08:50:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.830 08:50:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.830 08:50:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.830 08:50:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.830 08:50:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.830 08:50:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:17:25.830 "name": "Existed_Raid", 00:17:25.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.830 "strip_size_kb": 64, 00:17:25.830 "state": "configuring", 00:17:25.830 "raid_level": "raid5f", 00:17:25.830 "superblock": false, 00:17:25.830 "num_base_bdevs": 3, 00:17:25.830 "num_base_bdevs_discovered": 2, 00:17:25.830 "num_base_bdevs_operational": 3, 00:17:25.830 "base_bdevs_list": [ 00:17:25.830 { 00:17:25.830 "name": "BaseBdev1", 00:17:25.830 "uuid": "ad33386d-73c3-4f11-8127-39104e4fef1b", 00:17:25.830 "is_configured": true, 00:17:25.830 "data_offset": 0, 00:17:25.830 "data_size": 65536 00:17:25.830 }, 00:17:25.830 { 00:17:25.830 "name": "BaseBdev2", 00:17:25.830 "uuid": "076ad577-1638-4b87-a920-50a46a5e6a33", 00:17:25.830 "is_configured": true, 00:17:25.830 "data_offset": 0, 00:17:25.830 "data_size": 65536 00:17:25.830 }, 00:17:25.830 { 00:17:25.830 "name": "BaseBdev3", 00:17:25.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.830 "is_configured": false, 00:17:25.830 "data_offset": 0, 00:17:25.830 "data_size": 0 00:17:25.830 } 00:17:25.830 ] 00:17:25.830 }' 00:17:25.830 08:50:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.830 08:50:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.399 [2024-11-20 08:50:57.076686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:26.399 [2024-11-20 08:50:57.076797] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:26.399 [2024-11-20 08:50:57.076822] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:26.399 [2024-11-20 08:50:57.077202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:26.399 [2024-11-20 08:50:57.082518] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:26.399 [2024-11-20 08:50:57.082555] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:26.399 [2024-11-20 08:50:57.082962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.399 BaseBdev3 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.399 [ 00:17:26.399 { 00:17:26.399 "name": "BaseBdev3", 00:17:26.399 "aliases": [ 00:17:26.399 "433c0451-aba1-46d0-98e8-38fdc90d6d04" 00:17:26.399 ], 00:17:26.399 "product_name": "Malloc disk", 00:17:26.399 "block_size": 512, 00:17:26.399 "num_blocks": 65536, 00:17:26.399 "uuid": "433c0451-aba1-46d0-98e8-38fdc90d6d04", 00:17:26.399 "assigned_rate_limits": { 00:17:26.399 "rw_ios_per_sec": 0, 00:17:26.399 "rw_mbytes_per_sec": 0, 00:17:26.399 "r_mbytes_per_sec": 0, 00:17:26.399 "w_mbytes_per_sec": 0 00:17:26.399 }, 00:17:26.399 "claimed": true, 00:17:26.399 "claim_type": "exclusive_write", 00:17:26.399 "zoned": false, 00:17:26.399 "supported_io_types": { 00:17:26.399 "read": true, 00:17:26.399 "write": true, 00:17:26.399 "unmap": true, 00:17:26.399 "flush": true, 00:17:26.399 "reset": true, 00:17:26.399 "nvme_admin": false, 00:17:26.399 "nvme_io": false, 00:17:26.399 "nvme_io_md": false, 00:17:26.399 "write_zeroes": true, 00:17:26.399 "zcopy": true, 00:17:26.399 "get_zone_info": false, 00:17:26.399 "zone_management": false, 00:17:26.399 "zone_append": false, 00:17:26.399 "compare": false, 00:17:26.399 "compare_and_write": false, 00:17:26.399 "abort": true, 00:17:26.399 "seek_hole": false, 00:17:26.399 "seek_data": false, 00:17:26.399 "copy": true, 00:17:26.399 "nvme_iov_md": false 00:17:26.399 }, 00:17:26.399 "memory_domains": [ 00:17:26.399 { 00:17:26.399 "dma_device_id": "system", 00:17:26.399 "dma_device_type": 1 00:17:26.399 }, 00:17:26.399 { 00:17:26.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.399 "dma_device_type": 2 00:17:26.399 } 00:17:26.399 ], 00:17:26.399 "driver_specific": {} 00:17:26.399 } 00:17:26.399 ] 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.399 08:50:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.399 "name": "Existed_Raid", 00:17:26.399 "uuid": "30aeab60-a9ad-47c5-82c1-7e07365527df", 00:17:26.399 "strip_size_kb": 64, 00:17:26.399 "state": "online", 00:17:26.399 "raid_level": "raid5f", 00:17:26.399 "superblock": false, 00:17:26.399 "num_base_bdevs": 3, 00:17:26.399 "num_base_bdevs_discovered": 3, 00:17:26.399 "num_base_bdevs_operational": 3, 00:17:26.399 "base_bdevs_list": [ 00:17:26.399 { 00:17:26.399 "name": "BaseBdev1", 00:17:26.399 "uuid": "ad33386d-73c3-4f11-8127-39104e4fef1b", 00:17:26.399 "is_configured": true, 00:17:26.399 "data_offset": 0, 00:17:26.399 "data_size": 65536 00:17:26.399 }, 00:17:26.399 { 00:17:26.399 "name": "BaseBdev2", 00:17:26.399 "uuid": "076ad577-1638-4b87-a920-50a46a5e6a33", 00:17:26.399 "is_configured": true, 00:17:26.399 "data_offset": 0, 00:17:26.399 "data_size": 65536 00:17:26.399 }, 00:17:26.399 { 00:17:26.399 "name": "BaseBdev3", 00:17:26.399 "uuid": "433c0451-aba1-46d0-98e8-38fdc90d6d04", 00:17:26.399 "is_configured": true, 00:17:26.399 "data_offset": 0, 00:17:26.399 "data_size": 65536 00:17:26.399 } 00:17:26.399 ] 00:17:26.399 }' 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.399 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:26.968 08:50:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.968 [2024-11-20 08:50:57.625002] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:26.968 "name": "Existed_Raid", 00:17:26.968 "aliases": [ 00:17:26.968 "30aeab60-a9ad-47c5-82c1-7e07365527df" 00:17:26.968 ], 00:17:26.968 "product_name": "Raid Volume", 00:17:26.968 "block_size": 512, 00:17:26.968 "num_blocks": 131072, 00:17:26.968 "uuid": "30aeab60-a9ad-47c5-82c1-7e07365527df", 00:17:26.968 "assigned_rate_limits": { 00:17:26.968 "rw_ios_per_sec": 0, 00:17:26.968 "rw_mbytes_per_sec": 0, 00:17:26.968 "r_mbytes_per_sec": 0, 00:17:26.968 "w_mbytes_per_sec": 0 00:17:26.968 }, 00:17:26.968 "claimed": false, 00:17:26.968 "zoned": false, 00:17:26.968 "supported_io_types": { 00:17:26.968 "read": true, 00:17:26.968 "write": true, 00:17:26.968 "unmap": false, 00:17:26.968 "flush": false, 00:17:26.968 "reset": true, 00:17:26.968 "nvme_admin": false, 00:17:26.968 "nvme_io": false, 00:17:26.968 "nvme_io_md": false, 00:17:26.968 "write_zeroes": true, 00:17:26.968 "zcopy": false, 00:17:26.968 "get_zone_info": false, 00:17:26.968 "zone_management": false, 00:17:26.968 "zone_append": false, 
00:17:26.968 "compare": false, 00:17:26.968 "compare_and_write": false, 00:17:26.968 "abort": false, 00:17:26.968 "seek_hole": false, 00:17:26.968 "seek_data": false, 00:17:26.968 "copy": false, 00:17:26.968 "nvme_iov_md": false 00:17:26.968 }, 00:17:26.968 "driver_specific": { 00:17:26.968 "raid": { 00:17:26.968 "uuid": "30aeab60-a9ad-47c5-82c1-7e07365527df", 00:17:26.968 "strip_size_kb": 64, 00:17:26.968 "state": "online", 00:17:26.968 "raid_level": "raid5f", 00:17:26.968 "superblock": false, 00:17:26.968 "num_base_bdevs": 3, 00:17:26.968 "num_base_bdevs_discovered": 3, 00:17:26.968 "num_base_bdevs_operational": 3, 00:17:26.968 "base_bdevs_list": [ 00:17:26.968 { 00:17:26.968 "name": "BaseBdev1", 00:17:26.968 "uuid": "ad33386d-73c3-4f11-8127-39104e4fef1b", 00:17:26.968 "is_configured": true, 00:17:26.968 "data_offset": 0, 00:17:26.968 "data_size": 65536 00:17:26.968 }, 00:17:26.968 { 00:17:26.968 "name": "BaseBdev2", 00:17:26.968 "uuid": "076ad577-1638-4b87-a920-50a46a5e6a33", 00:17:26.968 "is_configured": true, 00:17:26.968 "data_offset": 0, 00:17:26.968 "data_size": 65536 00:17:26.968 }, 00:17:26.968 { 00:17:26.968 "name": "BaseBdev3", 00:17:26.968 "uuid": "433c0451-aba1-46d0-98e8-38fdc90d6d04", 00:17:26.968 "is_configured": true, 00:17:26.968 "data_offset": 0, 00:17:26.968 "data_size": 65536 00:17:26.968 } 00:17:26.968 ] 00:17:26.968 } 00:17:26.968 } 00:17:26.968 }' 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:26.968 BaseBdev2 00:17:26.968 BaseBdev3' 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.968 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.969 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:26.969 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:26.969 08:50:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:26.969 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:26.969 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.969 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.969 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:26.969 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.228 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:27.228 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:27.228 08:50:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:27.228 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.228 08:50:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.228 [2024-11-20 08:50:57.924961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:27.228 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.228 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:27.228 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:27.228 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:27.228 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:27.228 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:27.228 
08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:17:27.228 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:27.228 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.228 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.228 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.228 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:27.228 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.228 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.228 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.228 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.228 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.228 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.228 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.228 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.228 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.228 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.228 "name": "Existed_Raid", 00:17:27.228 "uuid": "30aeab60-a9ad-47c5-82c1-7e07365527df", 00:17:27.228 "strip_size_kb": 64, 00:17:27.228 "state": 
"online", 00:17:27.228 "raid_level": "raid5f", 00:17:27.228 "superblock": false, 00:17:27.228 "num_base_bdevs": 3, 00:17:27.228 "num_base_bdevs_discovered": 2, 00:17:27.228 "num_base_bdevs_operational": 2, 00:17:27.228 "base_bdevs_list": [ 00:17:27.228 { 00:17:27.228 "name": null, 00:17:27.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.228 "is_configured": false, 00:17:27.228 "data_offset": 0, 00:17:27.228 "data_size": 65536 00:17:27.228 }, 00:17:27.228 { 00:17:27.228 "name": "BaseBdev2", 00:17:27.228 "uuid": "076ad577-1638-4b87-a920-50a46a5e6a33", 00:17:27.228 "is_configured": true, 00:17:27.228 "data_offset": 0, 00:17:27.228 "data_size": 65536 00:17:27.228 }, 00:17:27.228 { 00:17:27.228 "name": "BaseBdev3", 00:17:27.228 "uuid": "433c0451-aba1-46d0-98e8-38fdc90d6d04", 00:17:27.228 "is_configured": true, 00:17:27.228 "data_offset": 0, 00:17:27.228 "data_size": 65536 00:17:27.228 } 00:17:27.228 ] 00:17:27.228 }' 00:17:27.228 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.228 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.797 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:27.797 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:27.797 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.797 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.797 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.797 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:27.797 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.797 08:50:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:27.797 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:27.797 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:27.797 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.797 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.797 [2024-11-20 08:50:58.606997] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:27.797 [2024-11-20 08:50:58.607318] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:27.797 [2024-11-20 08:50:58.695392] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:27.797 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.797 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:27.797 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:27.797 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.797 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.797 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:27.797 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.056 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.056 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:28.056 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
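The trace above repeatedly calls `verify_raid_bdev_state`, which fetches every raid bdev and filters for the one under test by name. A minimal self-contained sketch of that check, using a JSON fragment shaped like the dump in the log (in the real test the input comes from `rpc.py bdev_raid_get_bdevs all` and the filter is the `jq` expression shown in the trace; `python3` stands in for `jq` here so the sketch runs without a live SPDK target, and the field values are copied from the log rather than queried):

```shell
# Inlined stand-in for the output of `rpc.py bdev_raid_get_bdevs all`,
# with values taken from the dump in the trace above.
bdevs='[{"name": "Existed_Raid", "state": "online", "raid_level": "raid5f",
         "strip_size_kb": 64, "num_base_bdevs": 3, "num_base_bdevs_discovered": 2}]'

# Equivalent of the test's filter:
#   jq -r '.[] | select(.name == "Existed_Raid")'
state=$(echo "$bdevs" | python3 -c '
import json, sys
bdevs = json.load(sys.stdin)
info = next(b for b in bdevs if b["name"] == "Existed_Raid")
print(info["state"])')

echo "$state"   # online
```

The test then simply string-compares fields like `state`, `raid_level`, and `num_base_bdevs_discovered` against the expected values passed to `verify_raid_bdev_state`.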
00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.057 [2024-11-20 08:50:58.755486] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:28.057 [2024-11-20 08:50:58.755551] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.057 BaseBdev2 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
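Each base bdev in this trace is recreated with `bdev_malloc_create 32 512`, i.e. a 32 MiB malloc disk with a 512-byte block size. The `bdev_get_bdevs` dumps in the trace accordingly report `"num_blocks": 65536`, which is just the size divided by the block size:

```shell
# Arithmetic behind the "num_blocks": 65536 seen in the bdev_get_bdevs dumps:
# 32 MiB of capacity divided into 512-byte blocks.
size_mb=32
block_size=512
num_blocks=$(( size_mb * 1024 * 1024 / block_size ))
echo "num_blocks=$num_blocks"   # num_blocks=65536
```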
00:17:28.057 [ 00:17:28.057 { 00:17:28.057 "name": "BaseBdev2", 00:17:28.057 "aliases": [ 00:17:28.057 "c499a574-f78f-4aa1-9a1c-fb1732901cfa" 00:17:28.057 ], 00:17:28.057 "product_name": "Malloc disk", 00:17:28.057 "block_size": 512, 00:17:28.057 "num_blocks": 65536, 00:17:28.057 "uuid": "c499a574-f78f-4aa1-9a1c-fb1732901cfa", 00:17:28.057 "assigned_rate_limits": { 00:17:28.057 "rw_ios_per_sec": 0, 00:17:28.057 "rw_mbytes_per_sec": 0, 00:17:28.057 "r_mbytes_per_sec": 0, 00:17:28.057 "w_mbytes_per_sec": 0 00:17:28.057 }, 00:17:28.057 "claimed": false, 00:17:28.057 "zoned": false, 00:17:28.057 "supported_io_types": { 00:17:28.057 "read": true, 00:17:28.057 "write": true, 00:17:28.057 "unmap": true, 00:17:28.057 "flush": true, 00:17:28.057 "reset": true, 00:17:28.057 "nvme_admin": false, 00:17:28.057 "nvme_io": false, 00:17:28.057 "nvme_io_md": false, 00:17:28.057 "write_zeroes": true, 00:17:28.057 "zcopy": true, 00:17:28.057 "get_zone_info": false, 00:17:28.057 "zone_management": false, 00:17:28.057 "zone_append": false, 00:17:28.057 "compare": false, 00:17:28.057 "compare_and_write": false, 00:17:28.057 "abort": true, 00:17:28.057 "seek_hole": false, 00:17:28.057 "seek_data": false, 00:17:28.057 "copy": true, 00:17:28.057 "nvme_iov_md": false 00:17:28.057 }, 00:17:28.057 "memory_domains": [ 00:17:28.057 { 00:17:28.057 "dma_device_id": "system", 00:17:28.057 "dma_device_type": 1 00:17:28.057 }, 00:17:28.057 { 00:17:28.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.057 "dma_device_type": 2 00:17:28.057 } 00:17:28.057 ], 00:17:28.057 "driver_specific": {} 00:17:28.057 } 00:17:28.057 ] 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.057 08:50:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.317 BaseBdev3 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:28.317 [ 00:17:28.317 { 00:17:28.317 "name": "BaseBdev3", 00:17:28.317 "aliases": [ 00:17:28.317 "58a4575a-3cf9-4f42-869e-6e87e16f4b0f" 00:17:28.317 ], 00:17:28.317 "product_name": "Malloc disk", 00:17:28.317 "block_size": 512, 00:17:28.317 "num_blocks": 65536, 00:17:28.317 "uuid": "58a4575a-3cf9-4f42-869e-6e87e16f4b0f", 00:17:28.317 "assigned_rate_limits": { 00:17:28.317 "rw_ios_per_sec": 0, 00:17:28.317 "rw_mbytes_per_sec": 0, 00:17:28.317 "r_mbytes_per_sec": 0, 00:17:28.317 "w_mbytes_per_sec": 0 00:17:28.317 }, 00:17:28.317 "claimed": false, 00:17:28.317 "zoned": false, 00:17:28.317 "supported_io_types": { 00:17:28.317 "read": true, 00:17:28.317 "write": true, 00:17:28.317 "unmap": true, 00:17:28.317 "flush": true, 00:17:28.317 "reset": true, 00:17:28.317 "nvme_admin": false, 00:17:28.317 "nvme_io": false, 00:17:28.317 "nvme_io_md": false, 00:17:28.317 "write_zeroes": true, 00:17:28.317 "zcopy": true, 00:17:28.317 "get_zone_info": false, 00:17:28.317 "zone_management": false, 00:17:28.317 "zone_append": false, 00:17:28.317 "compare": false, 00:17:28.317 "compare_and_write": false, 00:17:28.317 "abort": true, 00:17:28.317 "seek_hole": false, 00:17:28.317 "seek_data": false, 00:17:28.317 "copy": true, 00:17:28.317 "nvme_iov_md": false 00:17:28.317 }, 00:17:28.317 "memory_domains": [ 00:17:28.317 { 00:17:28.317 "dma_device_id": "system", 00:17:28.317 "dma_device_type": 1 00:17:28.317 }, 00:17:28.317 { 00:17:28.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.317 "dma_device_type": 2 00:17:28.317 } 00:17:28.317 ], 00:17:28.317 "driver_specific": {} 00:17:28.317 } 00:17:28.317 ] 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:28.317 08:50:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.317 [2024-11-20 08:50:59.048490] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:28.317 [2024-11-20 08:50:59.048682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:28.317 [2024-11-20 08:50:59.048728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:28.317 [2024-11-20 08:50:59.051078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.317 08:50:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.317 "name": "Existed_Raid", 00:17:28.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.317 "strip_size_kb": 64, 00:17:28.317 "state": "configuring", 00:17:28.317 "raid_level": "raid5f", 00:17:28.317 "superblock": false, 00:17:28.317 "num_base_bdevs": 3, 00:17:28.317 "num_base_bdevs_discovered": 2, 00:17:28.317 "num_base_bdevs_operational": 3, 00:17:28.317 "base_bdevs_list": [ 00:17:28.317 { 00:17:28.317 "name": "BaseBdev1", 00:17:28.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.317 "is_configured": false, 00:17:28.317 "data_offset": 0, 00:17:28.317 "data_size": 0 00:17:28.317 }, 00:17:28.317 { 00:17:28.317 "name": "BaseBdev2", 00:17:28.317 "uuid": "c499a574-f78f-4aa1-9a1c-fb1732901cfa", 00:17:28.317 "is_configured": true, 00:17:28.317 "data_offset": 0, 00:17:28.317 "data_size": 65536 00:17:28.317 }, 00:17:28.317 { 00:17:28.317 "name": "BaseBdev3", 00:17:28.317 "uuid": "58a4575a-3cf9-4f42-869e-6e87e16f4b0f", 00:17:28.317 "is_configured": true, 
00:17:28.317 "data_offset": 0, 00:17:28.317 "data_size": 65536 00:17:28.317 } 00:17:28.317 ] 00:17:28.317 }' 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.317 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.884 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:28.884 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.884 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.884 [2024-11-20 08:50:59.556635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:28.884 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.884 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:28.884 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.884 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:28.884 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:28.884 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.885 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:28.885 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.885 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.885 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.885 08:50:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.885 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.885 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.885 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.885 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.885 08:50:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.885 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.885 "name": "Existed_Raid", 00:17:28.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.885 "strip_size_kb": 64, 00:17:28.885 "state": "configuring", 00:17:28.885 "raid_level": "raid5f", 00:17:28.885 "superblock": false, 00:17:28.885 "num_base_bdevs": 3, 00:17:28.885 "num_base_bdevs_discovered": 1, 00:17:28.885 "num_base_bdevs_operational": 3, 00:17:28.885 "base_bdevs_list": [ 00:17:28.885 { 00:17:28.885 "name": "BaseBdev1", 00:17:28.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.885 "is_configured": false, 00:17:28.885 "data_offset": 0, 00:17:28.885 "data_size": 0 00:17:28.885 }, 00:17:28.885 { 00:17:28.885 "name": null, 00:17:28.885 "uuid": "c499a574-f78f-4aa1-9a1c-fb1732901cfa", 00:17:28.885 "is_configured": false, 00:17:28.885 "data_offset": 0, 00:17:28.885 "data_size": 65536 00:17:28.885 }, 00:17:28.885 { 00:17:28.885 "name": "BaseBdev3", 00:17:28.885 "uuid": "58a4575a-3cf9-4f42-869e-6e87e16f4b0f", 00:17:28.885 "is_configured": true, 00:17:28.885 "data_offset": 0, 00:17:28.885 "data_size": 65536 00:17:28.885 } 00:17:28.885 ] 00:17:28.885 }' 00:17:28.885 08:50:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.885 08:50:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.144 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.144 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.144 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:29.144 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.403 [2024-11-20 08:51:00.130679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:29.403 BaseBdev1 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:29.403 08:51:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.403 [ 00:17:29.403 { 00:17:29.403 "name": "BaseBdev1", 00:17:29.403 "aliases": [ 00:17:29.403 "013ec1d2-ab17-42cd-8f08-e190f77e6b8c" 00:17:29.403 ], 00:17:29.403 "product_name": "Malloc disk", 00:17:29.403 "block_size": 512, 00:17:29.403 "num_blocks": 65536, 00:17:29.403 "uuid": "013ec1d2-ab17-42cd-8f08-e190f77e6b8c", 00:17:29.403 "assigned_rate_limits": { 00:17:29.403 "rw_ios_per_sec": 0, 00:17:29.403 "rw_mbytes_per_sec": 0, 00:17:29.403 "r_mbytes_per_sec": 0, 00:17:29.403 "w_mbytes_per_sec": 0 00:17:29.403 }, 00:17:29.403 "claimed": true, 00:17:29.403 "claim_type": "exclusive_write", 00:17:29.403 "zoned": false, 00:17:29.403 "supported_io_types": { 00:17:29.403 "read": true, 00:17:29.403 "write": true, 00:17:29.403 "unmap": true, 00:17:29.403 "flush": true, 00:17:29.403 "reset": true, 00:17:29.403 "nvme_admin": false, 00:17:29.403 "nvme_io": false, 00:17:29.403 "nvme_io_md": false, 00:17:29.403 "write_zeroes": true, 00:17:29.403 "zcopy": true, 00:17:29.403 "get_zone_info": false, 00:17:29.403 "zone_management": false, 00:17:29.403 "zone_append": false, 00:17:29.403 
"compare": false, 00:17:29.403 "compare_and_write": false, 00:17:29.403 "abort": true, 00:17:29.403 "seek_hole": false, 00:17:29.403 "seek_data": false, 00:17:29.403 "copy": true, 00:17:29.403 "nvme_iov_md": false 00:17:29.403 }, 00:17:29.403 "memory_domains": [ 00:17:29.403 { 00:17:29.403 "dma_device_id": "system", 00:17:29.403 "dma_device_type": 1 00:17:29.403 }, 00:17:29.403 { 00:17:29.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.403 "dma_device_type": 2 00:17:29.403 } 00:17:29.403 ], 00:17:29.403 "driver_specific": {} 00:17:29.403 } 00:17:29.403 ] 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.403 08:51:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.403 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.403 "name": "Existed_Raid", 00:17:29.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.403 "strip_size_kb": 64, 00:17:29.403 "state": "configuring", 00:17:29.403 "raid_level": "raid5f", 00:17:29.403 "superblock": false, 00:17:29.403 "num_base_bdevs": 3, 00:17:29.403 "num_base_bdevs_discovered": 2, 00:17:29.403 "num_base_bdevs_operational": 3, 00:17:29.403 "base_bdevs_list": [ 00:17:29.403 { 00:17:29.403 "name": "BaseBdev1", 00:17:29.403 "uuid": "013ec1d2-ab17-42cd-8f08-e190f77e6b8c", 00:17:29.403 "is_configured": true, 00:17:29.403 "data_offset": 0, 00:17:29.403 "data_size": 65536 00:17:29.403 }, 00:17:29.403 { 00:17:29.403 "name": null, 00:17:29.403 "uuid": "c499a574-f78f-4aa1-9a1c-fb1732901cfa", 00:17:29.403 "is_configured": false, 00:17:29.403 "data_offset": 0, 00:17:29.403 "data_size": 65536 00:17:29.403 }, 00:17:29.403 { 00:17:29.403 "name": "BaseBdev3", 00:17:29.404 "uuid": "58a4575a-3cf9-4f42-869e-6e87e16f4b0f", 00:17:29.404 "is_configured": true, 00:17:29.404 "data_offset": 0, 00:17:29.404 "data_size": 65536 00:17:29.404 } 00:17:29.404 ] 00:17:29.404 }' 00:17:29.404 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.404 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.973 08:51:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.973 [2024-11-20 08:51:00.762891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:29.973 08:51:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.973 "name": "Existed_Raid", 00:17:29.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.973 "strip_size_kb": 64, 00:17:29.973 "state": "configuring", 00:17:29.973 "raid_level": "raid5f", 00:17:29.973 "superblock": false, 00:17:29.973 "num_base_bdevs": 3, 00:17:29.973 "num_base_bdevs_discovered": 1, 00:17:29.973 "num_base_bdevs_operational": 3, 00:17:29.973 "base_bdevs_list": [ 00:17:29.973 { 00:17:29.973 "name": "BaseBdev1", 00:17:29.973 "uuid": "013ec1d2-ab17-42cd-8f08-e190f77e6b8c", 00:17:29.973 "is_configured": true, 00:17:29.973 "data_offset": 0, 00:17:29.973 "data_size": 65536 00:17:29.973 }, 00:17:29.973 { 00:17:29.973 "name": null, 00:17:29.973 "uuid": "c499a574-f78f-4aa1-9a1c-fb1732901cfa", 00:17:29.973 "is_configured": false, 00:17:29.973 "data_offset": 0, 00:17:29.973 "data_size": 65536 00:17:29.973 }, 00:17:29.973 { 00:17:29.973 "name": null, 
00:17:29.973 "uuid": "58a4575a-3cf9-4f42-869e-6e87e16f4b0f", 00:17:29.973 "is_configured": false, 00:17:29.973 "data_offset": 0, 00:17:29.973 "data_size": 65536 00:17:29.973 } 00:17:29.973 ] 00:17:29.973 }' 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.973 08:51:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.541 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.542 [2024-11-20 08:51:01.331103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.542 08:51:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.542 "name": "Existed_Raid", 00:17:30.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.542 "strip_size_kb": 64, 00:17:30.542 "state": "configuring", 00:17:30.542 "raid_level": "raid5f", 00:17:30.542 "superblock": false, 00:17:30.542 "num_base_bdevs": 3, 00:17:30.542 "num_base_bdevs_discovered": 2, 00:17:30.542 "num_base_bdevs_operational": 3, 00:17:30.542 "base_bdevs_list": [ 00:17:30.542 { 
00:17:30.542 "name": "BaseBdev1", 00:17:30.542 "uuid": "013ec1d2-ab17-42cd-8f08-e190f77e6b8c", 00:17:30.542 "is_configured": true, 00:17:30.542 "data_offset": 0, 00:17:30.542 "data_size": 65536 00:17:30.542 }, 00:17:30.542 { 00:17:30.542 "name": null, 00:17:30.542 "uuid": "c499a574-f78f-4aa1-9a1c-fb1732901cfa", 00:17:30.542 "is_configured": false, 00:17:30.542 "data_offset": 0, 00:17:30.542 "data_size": 65536 00:17:30.542 }, 00:17:30.542 { 00:17:30.542 "name": "BaseBdev3", 00:17:30.542 "uuid": "58a4575a-3cf9-4f42-869e-6e87e16f4b0f", 00:17:30.542 "is_configured": true, 00:17:30.542 "data_offset": 0, 00:17:30.542 "data_size": 65536 00:17:30.542 } 00:17:30.542 ] 00:17:30.542 }' 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.542 08:51:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.109 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:31.109 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.109 08:51:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.109 08:51:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.109 08:51:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.109 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:31.109 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:31.109 08:51:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.109 08:51:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.109 [2024-11-20 08:51:01.907279] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:31.109 08:51:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.109 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:31.109 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.109 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.109 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.109 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.109 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:31.109 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.109 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.109 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.109 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.109 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.109 08:51:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.109 08:51:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.109 08:51:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.109 08:51:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.369 08:51:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.369 "name": "Existed_Raid", 00:17:31.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.369 "strip_size_kb": 64, 00:17:31.369 "state": "configuring", 00:17:31.369 "raid_level": "raid5f", 00:17:31.369 "superblock": false, 00:17:31.369 "num_base_bdevs": 3, 00:17:31.369 "num_base_bdevs_discovered": 1, 00:17:31.369 "num_base_bdevs_operational": 3, 00:17:31.369 "base_bdevs_list": [ 00:17:31.369 { 00:17:31.369 "name": null, 00:17:31.369 "uuid": "013ec1d2-ab17-42cd-8f08-e190f77e6b8c", 00:17:31.369 "is_configured": false, 00:17:31.369 "data_offset": 0, 00:17:31.369 "data_size": 65536 00:17:31.369 }, 00:17:31.369 { 00:17:31.369 "name": null, 00:17:31.369 "uuid": "c499a574-f78f-4aa1-9a1c-fb1732901cfa", 00:17:31.369 "is_configured": false, 00:17:31.369 "data_offset": 0, 00:17:31.369 "data_size": 65536 00:17:31.369 }, 00:17:31.369 { 00:17:31.369 "name": "BaseBdev3", 00:17:31.369 "uuid": "58a4575a-3cf9-4f42-869e-6e87e16f4b0f", 00:17:31.369 "is_configured": true, 00:17:31.369 "data_offset": 0, 00:17:31.369 "data_size": 65536 00:17:31.369 } 00:17:31.369 ] 00:17:31.369 }' 00:17:31.369 08:51:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.369 08:51:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.628 08:51:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.628 08:51:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:31.628 08:51:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.628 08:51:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.628 08:51:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.887 08:51:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:31.887 08:51:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:31.887 08:51:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.887 08:51:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.887 [2024-11-20 08:51:02.563283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:31.887 08:51:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.887 08:51:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:31.887 08:51:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.887 08:51:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.887 08:51:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.887 08:51:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.887 08:51:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:31.887 08:51:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.887 08:51:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.887 08:51:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.887 08:51:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.887 08:51:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.887 08:51:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.887 08:51:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.887 08:51:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.887 08:51:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.887 08:51:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.887 "name": "Existed_Raid", 00:17:31.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.887 "strip_size_kb": 64, 00:17:31.887 "state": "configuring", 00:17:31.887 "raid_level": "raid5f", 00:17:31.887 "superblock": false, 00:17:31.887 "num_base_bdevs": 3, 00:17:31.887 "num_base_bdevs_discovered": 2, 00:17:31.887 "num_base_bdevs_operational": 3, 00:17:31.887 "base_bdevs_list": [ 00:17:31.887 { 00:17:31.887 "name": null, 00:17:31.887 "uuid": "013ec1d2-ab17-42cd-8f08-e190f77e6b8c", 00:17:31.887 "is_configured": false, 00:17:31.887 "data_offset": 0, 00:17:31.887 "data_size": 65536 00:17:31.887 }, 00:17:31.887 { 00:17:31.887 "name": "BaseBdev2", 00:17:31.887 "uuid": "c499a574-f78f-4aa1-9a1c-fb1732901cfa", 00:17:31.887 "is_configured": true, 00:17:31.887 "data_offset": 0, 00:17:31.887 "data_size": 65536 00:17:31.887 }, 00:17:31.887 { 00:17:31.887 "name": "BaseBdev3", 00:17:31.887 "uuid": "58a4575a-3cf9-4f42-869e-6e87e16f4b0f", 00:17:31.887 "is_configured": true, 00:17:31.887 "data_offset": 0, 00:17:31.888 "data_size": 65536 00:17:31.888 } 00:17:31.888 ] 00:17:31.888 }' 00:17:31.888 08:51:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.888 08:51:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.459 08:51:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 013ec1d2-ab17-42cd-8f08-e190f77e6b8c 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.459 [2024-11-20 08:51:03.245856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:32.459 [2024-11-20 08:51:03.245911] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:32.459 [2024-11-20 08:51:03.245926] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:32.459 [2024-11-20 08:51:03.246474] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:17:32.459 [2024-11-20 08:51:03.251459] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:32.459 [2024-11-20 08:51:03.251622] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:32.459 [2024-11-20 08:51:03.251955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.459 NewBaseBdev 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.459 08:51:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.459 [ 00:17:32.459 { 00:17:32.459 "name": "NewBaseBdev", 00:17:32.459 "aliases": [ 00:17:32.459 "013ec1d2-ab17-42cd-8f08-e190f77e6b8c" 00:17:32.459 ], 00:17:32.459 "product_name": "Malloc disk", 00:17:32.459 "block_size": 512, 00:17:32.459 "num_blocks": 65536, 00:17:32.459 "uuid": "013ec1d2-ab17-42cd-8f08-e190f77e6b8c", 00:17:32.459 "assigned_rate_limits": { 00:17:32.459 "rw_ios_per_sec": 0, 00:17:32.459 "rw_mbytes_per_sec": 0, 00:17:32.459 "r_mbytes_per_sec": 0, 00:17:32.459 "w_mbytes_per_sec": 0 00:17:32.459 }, 00:17:32.459 "claimed": true, 00:17:32.459 "claim_type": "exclusive_write", 00:17:32.459 "zoned": false, 00:17:32.459 "supported_io_types": { 00:17:32.459 "read": true, 00:17:32.459 "write": true, 00:17:32.459 "unmap": true, 00:17:32.459 "flush": true, 00:17:32.459 "reset": true, 00:17:32.459 "nvme_admin": false, 00:17:32.459 "nvme_io": false, 00:17:32.459 "nvme_io_md": false, 00:17:32.459 "write_zeroes": true, 00:17:32.459 "zcopy": true, 00:17:32.459 "get_zone_info": false, 00:17:32.459 "zone_management": false, 00:17:32.459 "zone_append": false, 00:17:32.459 "compare": false, 00:17:32.459 "compare_and_write": false, 00:17:32.459 "abort": true, 00:17:32.459 "seek_hole": false, 00:17:32.459 "seek_data": false, 00:17:32.459 "copy": true, 00:17:32.459 "nvme_iov_md": false 00:17:32.459 }, 00:17:32.459 "memory_domains": [ 00:17:32.459 { 00:17:32.459 "dma_device_id": "system", 00:17:32.459 "dma_device_type": 1 00:17:32.459 }, 00:17:32.459 { 00:17:32.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.459 "dma_device_type": 2 00:17:32.459 } 00:17:32.459 ], 00:17:32.459 "driver_specific": {} 00:17:32.459 } 00:17:32.459 ] 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.459 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:32.459 08:51:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:32.460 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:32.460 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.460 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:32.460 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.460 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:32.460 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.460 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.460 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.460 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.460 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.460 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.460 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.460 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.460 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.460 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.460 "name": "Existed_Raid", 00:17:32.460 "uuid": "f043fa90-44d9-4298-aaf0-efd268fdeeaf", 00:17:32.460 "strip_size_kb": 64, 00:17:32.460 "state": "online", 
00:17:32.460 "raid_level": "raid5f", 00:17:32.460 "superblock": false, 00:17:32.460 "num_base_bdevs": 3, 00:17:32.460 "num_base_bdevs_discovered": 3, 00:17:32.460 "num_base_bdevs_operational": 3, 00:17:32.460 "base_bdevs_list": [ 00:17:32.460 { 00:17:32.460 "name": "NewBaseBdev", 00:17:32.460 "uuid": "013ec1d2-ab17-42cd-8f08-e190f77e6b8c", 00:17:32.460 "is_configured": true, 00:17:32.460 "data_offset": 0, 00:17:32.460 "data_size": 65536 00:17:32.460 }, 00:17:32.460 { 00:17:32.460 "name": "BaseBdev2", 00:17:32.460 "uuid": "c499a574-f78f-4aa1-9a1c-fb1732901cfa", 00:17:32.460 "is_configured": true, 00:17:32.460 "data_offset": 0, 00:17:32.460 "data_size": 65536 00:17:32.460 }, 00:17:32.460 { 00:17:32.460 "name": "BaseBdev3", 00:17:32.460 "uuid": "58a4575a-3cf9-4f42-869e-6e87e16f4b0f", 00:17:32.460 "is_configured": true, 00:17:32.460 "data_offset": 0, 00:17:32.460 "data_size": 65536 00:17:32.460 } 00:17:32.460 ] 00:17:32.460 }' 00:17:32.460 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.460 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.036 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:33.036 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:33.036 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:33.036 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:33.036 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:33.036 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:33.036 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:33.036 08:51:03 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:33.036 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.036 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.036 [2024-11-20 08:51:03.805878] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:33.036 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.036 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:33.036 "name": "Existed_Raid", 00:17:33.036 "aliases": [ 00:17:33.036 "f043fa90-44d9-4298-aaf0-efd268fdeeaf" 00:17:33.036 ], 00:17:33.036 "product_name": "Raid Volume", 00:17:33.036 "block_size": 512, 00:17:33.036 "num_blocks": 131072, 00:17:33.036 "uuid": "f043fa90-44d9-4298-aaf0-efd268fdeeaf", 00:17:33.036 "assigned_rate_limits": { 00:17:33.036 "rw_ios_per_sec": 0, 00:17:33.036 "rw_mbytes_per_sec": 0, 00:17:33.036 "r_mbytes_per_sec": 0, 00:17:33.036 "w_mbytes_per_sec": 0 00:17:33.036 }, 00:17:33.036 "claimed": false, 00:17:33.036 "zoned": false, 00:17:33.036 "supported_io_types": { 00:17:33.036 "read": true, 00:17:33.036 "write": true, 00:17:33.036 "unmap": false, 00:17:33.036 "flush": false, 00:17:33.036 "reset": true, 00:17:33.036 "nvme_admin": false, 00:17:33.036 "nvme_io": false, 00:17:33.036 "nvme_io_md": false, 00:17:33.036 "write_zeroes": true, 00:17:33.036 "zcopy": false, 00:17:33.036 "get_zone_info": false, 00:17:33.036 "zone_management": false, 00:17:33.036 "zone_append": false, 00:17:33.036 "compare": false, 00:17:33.036 "compare_and_write": false, 00:17:33.036 "abort": false, 00:17:33.036 "seek_hole": false, 00:17:33.036 "seek_data": false, 00:17:33.036 "copy": false, 00:17:33.036 "nvme_iov_md": false 00:17:33.036 }, 00:17:33.036 "driver_specific": { 00:17:33.036 "raid": { 00:17:33.036 "uuid": "f043fa90-44d9-4298-aaf0-efd268fdeeaf", 
00:17:33.036 "strip_size_kb": 64, 00:17:33.036 "state": "online", 00:17:33.036 "raid_level": "raid5f", 00:17:33.036 "superblock": false, 00:17:33.036 "num_base_bdevs": 3, 00:17:33.036 "num_base_bdevs_discovered": 3, 00:17:33.036 "num_base_bdevs_operational": 3, 00:17:33.036 "base_bdevs_list": [ 00:17:33.036 { 00:17:33.036 "name": "NewBaseBdev", 00:17:33.036 "uuid": "013ec1d2-ab17-42cd-8f08-e190f77e6b8c", 00:17:33.036 "is_configured": true, 00:17:33.036 "data_offset": 0, 00:17:33.036 "data_size": 65536 00:17:33.036 }, 00:17:33.036 { 00:17:33.036 "name": "BaseBdev2", 00:17:33.036 "uuid": "c499a574-f78f-4aa1-9a1c-fb1732901cfa", 00:17:33.036 "is_configured": true, 00:17:33.036 "data_offset": 0, 00:17:33.036 "data_size": 65536 00:17:33.036 }, 00:17:33.036 { 00:17:33.036 "name": "BaseBdev3", 00:17:33.036 "uuid": "58a4575a-3cf9-4f42-869e-6e87e16f4b0f", 00:17:33.036 "is_configured": true, 00:17:33.036 "data_offset": 0, 00:17:33.036 "data_size": 65536 00:17:33.036 } 00:17:33.036 ] 00:17:33.036 } 00:17:33.036 } 00:17:33.036 }' 00:17:33.036 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:33.036 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:33.036 BaseBdev2 00:17:33.036 BaseBdev3' 00:17:33.036 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.295 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:33.295 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.295 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:33.295 08:51:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.295 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.295 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.295 08:51:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.295 
08:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.295 [2024-11-20 08:51:04.105708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:33.295 [2024-11-20 08:51:04.105739] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.295 [2024-11-20 08:51:04.105820] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.295 [2024-11-20 08:51:04.106153] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.295 [2024-11-20 08:51:04.106208] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80242 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80242 ']' 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 80242 
00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80242 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80242' 00:17:33.295 killing process with pid 80242 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80242 00:17:33.295 [2024-11-20 08:51:04.149442] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:33.295 08:51:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80242 00:17:33.554 [2024-11-20 08:51:04.408940] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:34.933 08:51:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:34.933 00:17:34.933 real 0m11.738s 00:17:34.933 user 0m19.484s 00:17:34.933 sys 0m1.642s 00:17:34.933 08:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:34.933 08:51:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.933 ************************************ 00:17:34.933 END TEST raid5f_state_function_test 00:17:34.933 ************************************ 00:17:34.934 08:51:05 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:17:34.934 08:51:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:34.934 
08:51:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:34.934 08:51:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:34.934 ************************************ 00:17:34.934 START TEST raid5f_state_function_test_sb 00:17:34.934 ************************************ 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:34.934 
08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80870 00:17:34.934 Process raid pid: 80870 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80870' 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80870 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:34.934 08:51:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80870 ']' 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.934 08:51:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.934 [2024-11-20 08:51:05.606984] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:17:34.934 [2024-11-20 08:51:05.607184] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.934 [2024-11-20 08:51:05.789858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.193 [2024-11-20 08:51:05.919991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.453 [2024-11-20 08:51:06.129181] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:35.453 [2024-11-20 08:51:06.129241] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:35.713 08:51:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:35.713 08:51:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:35.713 08:51:06 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:35.713 08:51:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.713 08:51:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.713 [2024-11-20 08:51:06.609842] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:35.713 [2024-11-20 08:51:06.609915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:35.713 [2024-11-20 08:51:06.609937] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:35.713 [2024-11-20 08:51:06.609954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:35.713 [2024-11-20 08:51:06.609970] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:35.713 [2024-11-20 08:51:06.609985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:35.713 08:51:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.713 08:51:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:35.713 08:51:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.713 08:51:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.713 08:51:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:35.713 08:51:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.713 08:51:06 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:35.713 08:51:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.713 08:51:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.713 08:51:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.713 08:51:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.713 08:51:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.713 08:51:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.713 08:51:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.713 08:51:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.971 08:51:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.971 08:51:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.971 "name": "Existed_Raid", 00:17:35.971 "uuid": "85d56cda-7009-43cf-aae7-682cdbb9f50d", 00:17:35.971 "strip_size_kb": 64, 00:17:35.971 "state": "configuring", 00:17:35.971 "raid_level": "raid5f", 00:17:35.971 "superblock": true, 00:17:35.971 "num_base_bdevs": 3, 00:17:35.971 "num_base_bdevs_discovered": 0, 00:17:35.971 "num_base_bdevs_operational": 3, 00:17:35.971 "base_bdevs_list": [ 00:17:35.971 { 00:17:35.971 "name": "BaseBdev1", 00:17:35.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.971 "is_configured": false, 00:17:35.971 "data_offset": 0, 00:17:35.971 "data_size": 0 00:17:35.971 }, 00:17:35.971 { 00:17:35.971 "name": "BaseBdev2", 00:17:35.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.971 "is_configured": false, 00:17:35.971 
"data_offset": 0, 00:17:35.971 "data_size": 0 00:17:35.971 }, 00:17:35.971 { 00:17:35.971 "name": "BaseBdev3", 00:17:35.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.971 "is_configured": false, 00:17:35.971 "data_offset": 0, 00:17:35.971 "data_size": 0 00:17:35.971 } 00:17:35.971 ] 00:17:35.971 }' 00:17:35.971 08:51:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.971 08:51:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.229 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:36.229 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.229 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.229 [2024-11-20 08:51:07.105907] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:36.229 [2024-11-20 08:51:07.105973] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:36.229 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.230 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:36.230 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.230 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.230 [2024-11-20 08:51:07.113898] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:36.230 [2024-11-20 08:51:07.113960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:36.230 [2024-11-20 08:51:07.113976] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:36.230 [2024-11-20 08:51:07.113992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:36.230 [2024-11-20 08:51:07.114002] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:36.230 [2024-11-20 08:51:07.114016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:36.230 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.230 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:36.230 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.230 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.488 [2024-11-20 08:51:07.159484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:36.488 BaseBdev1 00:17:36.488 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.488 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:36.488 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:36.488 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:36.488 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:36.488 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:36.488 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:36.488 08:51:07 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.489 [ 00:17:36.489 { 00:17:36.489 "name": "BaseBdev1", 00:17:36.489 "aliases": [ 00:17:36.489 "bf514877-cd73-4b5a-b2b6-e8e92d61a553" 00:17:36.489 ], 00:17:36.489 "product_name": "Malloc disk", 00:17:36.489 "block_size": 512, 00:17:36.489 "num_blocks": 65536, 00:17:36.489 "uuid": "bf514877-cd73-4b5a-b2b6-e8e92d61a553", 00:17:36.489 "assigned_rate_limits": { 00:17:36.489 "rw_ios_per_sec": 0, 00:17:36.489 "rw_mbytes_per_sec": 0, 00:17:36.489 "r_mbytes_per_sec": 0, 00:17:36.489 "w_mbytes_per_sec": 0 00:17:36.489 }, 00:17:36.489 "claimed": true, 00:17:36.489 "claim_type": "exclusive_write", 00:17:36.489 "zoned": false, 00:17:36.489 "supported_io_types": { 00:17:36.489 "read": true, 00:17:36.489 "write": true, 00:17:36.489 "unmap": true, 00:17:36.489 "flush": true, 00:17:36.489 "reset": true, 00:17:36.489 "nvme_admin": false, 00:17:36.489 "nvme_io": false, 00:17:36.489 "nvme_io_md": false, 00:17:36.489 "write_zeroes": true, 00:17:36.489 "zcopy": true, 00:17:36.489 "get_zone_info": false, 00:17:36.489 "zone_management": false, 00:17:36.489 "zone_append": false, 00:17:36.489 "compare": false, 00:17:36.489 "compare_and_write": false, 00:17:36.489 "abort": true, 00:17:36.489 "seek_hole": false, 00:17:36.489 
"seek_data": false, 00:17:36.489 "copy": true, 00:17:36.489 "nvme_iov_md": false 00:17:36.489 }, 00:17:36.489 "memory_domains": [ 00:17:36.489 { 00:17:36.489 "dma_device_id": "system", 00:17:36.489 "dma_device_type": 1 00:17:36.489 }, 00:17:36.489 { 00:17:36.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.489 "dma_device_type": 2 00:17:36.489 } 00:17:36.489 ], 00:17:36.489 "driver_specific": {} 00:17:36.489 } 00:17:36.489 ] 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.489 "name": "Existed_Raid", 00:17:36.489 "uuid": "b38f4b3a-8bdc-463f-9e47-87f7605eb617", 00:17:36.489 "strip_size_kb": 64, 00:17:36.489 "state": "configuring", 00:17:36.489 "raid_level": "raid5f", 00:17:36.489 "superblock": true, 00:17:36.489 "num_base_bdevs": 3, 00:17:36.489 "num_base_bdevs_discovered": 1, 00:17:36.489 "num_base_bdevs_operational": 3, 00:17:36.489 "base_bdevs_list": [ 00:17:36.489 { 00:17:36.489 "name": "BaseBdev1", 00:17:36.489 "uuid": "bf514877-cd73-4b5a-b2b6-e8e92d61a553", 00:17:36.489 "is_configured": true, 00:17:36.489 "data_offset": 2048, 00:17:36.489 "data_size": 63488 00:17:36.489 }, 00:17:36.489 { 00:17:36.489 "name": "BaseBdev2", 00:17:36.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.489 "is_configured": false, 00:17:36.489 "data_offset": 0, 00:17:36.489 "data_size": 0 00:17:36.489 }, 00:17:36.489 { 00:17:36.489 "name": "BaseBdev3", 00:17:36.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.489 "is_configured": false, 00:17:36.489 "data_offset": 0, 00:17:36.489 "data_size": 0 00:17:36.489 } 00:17:36.489 ] 00:17:36.489 }' 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.489 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.056 [2024-11-20 08:51:07.695824] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:37.056 [2024-11-20 08:51:07.695894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.056 [2024-11-20 08:51:07.703878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:37.056 [2024-11-20 08:51:07.706348] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:37.056 [2024-11-20 08:51:07.706413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:37.056 [2024-11-20 08:51:07.706430] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:37.056 [2024-11-20 08:51:07.706446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.056 "name": 
"Existed_Raid", 00:17:37.056 "uuid": "b787e09b-af1c-4924-80f1-6b07ed5cde05", 00:17:37.056 "strip_size_kb": 64, 00:17:37.056 "state": "configuring", 00:17:37.056 "raid_level": "raid5f", 00:17:37.056 "superblock": true, 00:17:37.056 "num_base_bdevs": 3, 00:17:37.056 "num_base_bdevs_discovered": 1, 00:17:37.056 "num_base_bdevs_operational": 3, 00:17:37.056 "base_bdevs_list": [ 00:17:37.056 { 00:17:37.056 "name": "BaseBdev1", 00:17:37.056 "uuid": "bf514877-cd73-4b5a-b2b6-e8e92d61a553", 00:17:37.056 "is_configured": true, 00:17:37.056 "data_offset": 2048, 00:17:37.056 "data_size": 63488 00:17:37.056 }, 00:17:37.056 { 00:17:37.056 "name": "BaseBdev2", 00:17:37.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.056 "is_configured": false, 00:17:37.056 "data_offset": 0, 00:17:37.056 "data_size": 0 00:17:37.056 }, 00:17:37.056 { 00:17:37.056 "name": "BaseBdev3", 00:17:37.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.056 "is_configured": false, 00:17:37.056 "data_offset": 0, 00:17:37.056 "data_size": 0 00:17:37.056 } 00:17:37.056 ] 00:17:37.056 }' 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.056 08:51:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.315 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:37.315 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.315 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.573 [2024-11-20 08:51:08.262468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:37.573 BaseBdev2 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.573 [ 00:17:37.573 { 00:17:37.573 "name": "BaseBdev2", 00:17:37.573 "aliases": [ 00:17:37.573 "3fef8865-fb60-4fd0-8fd9-d54d805ff8d5" 00:17:37.573 ], 00:17:37.573 "product_name": "Malloc disk", 00:17:37.573 "block_size": 512, 00:17:37.573 "num_blocks": 65536, 00:17:37.573 "uuid": "3fef8865-fb60-4fd0-8fd9-d54d805ff8d5", 00:17:37.573 "assigned_rate_limits": { 00:17:37.573 "rw_ios_per_sec": 0, 00:17:37.573 "rw_mbytes_per_sec": 0, 00:17:37.573 "r_mbytes_per_sec": 0, 00:17:37.573 "w_mbytes_per_sec": 0 00:17:37.573 }, 00:17:37.573 "claimed": true, 
00:17:37.573 "claim_type": "exclusive_write", 00:17:37.573 "zoned": false, 00:17:37.573 "supported_io_types": { 00:17:37.573 "read": true, 00:17:37.573 "write": true, 00:17:37.573 "unmap": true, 00:17:37.573 "flush": true, 00:17:37.573 "reset": true, 00:17:37.573 "nvme_admin": false, 00:17:37.573 "nvme_io": false, 00:17:37.573 "nvme_io_md": false, 00:17:37.573 "write_zeroes": true, 00:17:37.573 "zcopy": true, 00:17:37.573 "get_zone_info": false, 00:17:37.573 "zone_management": false, 00:17:37.573 "zone_append": false, 00:17:37.573 "compare": false, 00:17:37.573 "compare_and_write": false, 00:17:37.573 "abort": true, 00:17:37.573 "seek_hole": false, 00:17:37.573 "seek_data": false, 00:17:37.573 "copy": true, 00:17:37.573 "nvme_iov_md": false 00:17:37.573 }, 00:17:37.573 "memory_domains": [ 00:17:37.573 { 00:17:37.573 "dma_device_id": "system", 00:17:37.573 "dma_device_type": 1 00:17:37.573 }, 00:17:37.573 { 00:17:37.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.573 "dma_device_type": 2 00:17:37.573 } 00:17:37.573 ], 00:17:37.573 "driver_specific": {} 00:17:37.573 } 00:17:37.573 ] 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:37.573 08:51:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.573 "name": "Existed_Raid", 00:17:37.573 "uuid": "b787e09b-af1c-4924-80f1-6b07ed5cde05", 00:17:37.573 "strip_size_kb": 64, 00:17:37.573 "state": "configuring", 00:17:37.573 "raid_level": "raid5f", 00:17:37.573 "superblock": true, 00:17:37.573 "num_base_bdevs": 3, 00:17:37.573 "num_base_bdevs_discovered": 2, 00:17:37.573 "num_base_bdevs_operational": 3, 00:17:37.573 "base_bdevs_list": [ 00:17:37.573 { 00:17:37.573 "name": "BaseBdev1", 00:17:37.573 "uuid": "bf514877-cd73-4b5a-b2b6-e8e92d61a553", 
00:17:37.573 "is_configured": true, 00:17:37.573 "data_offset": 2048, 00:17:37.573 "data_size": 63488 00:17:37.573 }, 00:17:37.573 { 00:17:37.573 "name": "BaseBdev2", 00:17:37.573 "uuid": "3fef8865-fb60-4fd0-8fd9-d54d805ff8d5", 00:17:37.573 "is_configured": true, 00:17:37.573 "data_offset": 2048, 00:17:37.573 "data_size": 63488 00:17:37.573 }, 00:17:37.573 { 00:17:37.573 "name": "BaseBdev3", 00:17:37.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.573 "is_configured": false, 00:17:37.573 "data_offset": 0, 00:17:37.573 "data_size": 0 00:17:37.573 } 00:17:37.573 ] 00:17:37.573 }' 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.573 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.140 [2024-11-20 08:51:08.869841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:38.140 [2024-11-20 08:51:08.870218] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:38.140 [2024-11-20 08:51:08.870252] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:38.140 BaseBdev3 00:17:38.140 [2024-11-20 08:51:08.870586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.140 [2024-11-20 08:51:08.875871] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:38.140 [2024-11-20 08:51:08.876041] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:38.140 [2024-11-20 08:51:08.876431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.140 [ 00:17:38.140 { 00:17:38.140 "name": "BaseBdev3", 00:17:38.140 "aliases": [ 00:17:38.140 "cdeb043e-aeaa-4504-93e9-35fca91a25ab" 00:17:38.140 ], 00:17:38.140 "product_name": "Malloc disk", 00:17:38.140 "block_size": 512, 00:17:38.140 
"num_blocks": 65536, 00:17:38.140 "uuid": "cdeb043e-aeaa-4504-93e9-35fca91a25ab", 00:17:38.140 "assigned_rate_limits": { 00:17:38.140 "rw_ios_per_sec": 0, 00:17:38.140 "rw_mbytes_per_sec": 0, 00:17:38.140 "r_mbytes_per_sec": 0, 00:17:38.140 "w_mbytes_per_sec": 0 00:17:38.140 }, 00:17:38.140 "claimed": true, 00:17:38.140 "claim_type": "exclusive_write", 00:17:38.140 "zoned": false, 00:17:38.140 "supported_io_types": { 00:17:38.140 "read": true, 00:17:38.140 "write": true, 00:17:38.140 "unmap": true, 00:17:38.140 "flush": true, 00:17:38.140 "reset": true, 00:17:38.140 "nvme_admin": false, 00:17:38.140 "nvme_io": false, 00:17:38.140 "nvme_io_md": false, 00:17:38.140 "write_zeroes": true, 00:17:38.140 "zcopy": true, 00:17:38.140 "get_zone_info": false, 00:17:38.140 "zone_management": false, 00:17:38.140 "zone_append": false, 00:17:38.140 "compare": false, 00:17:38.140 "compare_and_write": false, 00:17:38.140 "abort": true, 00:17:38.140 "seek_hole": false, 00:17:38.140 "seek_data": false, 00:17:38.140 "copy": true, 00:17:38.140 "nvme_iov_md": false 00:17:38.140 }, 00:17:38.140 "memory_domains": [ 00:17:38.140 { 00:17:38.140 "dma_device_id": "system", 00:17:38.140 "dma_device_type": 1 00:17:38.140 }, 00:17:38.140 { 00:17:38.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.140 "dma_device_type": 2 00:17:38.140 } 00:17:38.140 ], 00:17:38.140 "driver_specific": {} 00:17:38.140 } 00:17:38.140 ] 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.140 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.141 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.141 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.141 "name": "Existed_Raid", 00:17:38.141 "uuid": "b787e09b-af1c-4924-80f1-6b07ed5cde05", 00:17:38.141 "strip_size_kb": 64, 00:17:38.141 "state": "online", 00:17:38.141 "raid_level": "raid5f", 00:17:38.141 "superblock": true, 
00:17:38.141 "num_base_bdevs": 3, 00:17:38.141 "num_base_bdevs_discovered": 3, 00:17:38.141 "num_base_bdevs_operational": 3, 00:17:38.141 "base_bdevs_list": [ 00:17:38.141 { 00:17:38.141 "name": "BaseBdev1", 00:17:38.141 "uuid": "bf514877-cd73-4b5a-b2b6-e8e92d61a553", 00:17:38.141 "is_configured": true, 00:17:38.141 "data_offset": 2048, 00:17:38.141 "data_size": 63488 00:17:38.141 }, 00:17:38.141 { 00:17:38.141 "name": "BaseBdev2", 00:17:38.141 "uuid": "3fef8865-fb60-4fd0-8fd9-d54d805ff8d5", 00:17:38.141 "is_configured": true, 00:17:38.141 "data_offset": 2048, 00:17:38.141 "data_size": 63488 00:17:38.141 }, 00:17:38.141 { 00:17:38.141 "name": "BaseBdev3", 00:17:38.141 "uuid": "cdeb043e-aeaa-4504-93e9-35fca91a25ab", 00:17:38.141 "is_configured": true, 00:17:38.141 "data_offset": 2048, 00:17:38.141 "data_size": 63488 00:17:38.141 } 00:17:38.141 ] 00:17:38.141 }' 00:17:38.141 08:51:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.141 08:51:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.707 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:38.707 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:38.707 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:38.707 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:38.707 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:38.707 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:38.707 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:38.707 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:38.707 08:51:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.708 [2024-11-20 08:51:09.398461] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:38.708 "name": "Existed_Raid", 00:17:38.708 "aliases": [ 00:17:38.708 "b787e09b-af1c-4924-80f1-6b07ed5cde05" 00:17:38.708 ], 00:17:38.708 "product_name": "Raid Volume", 00:17:38.708 "block_size": 512, 00:17:38.708 "num_blocks": 126976, 00:17:38.708 "uuid": "b787e09b-af1c-4924-80f1-6b07ed5cde05", 00:17:38.708 "assigned_rate_limits": { 00:17:38.708 "rw_ios_per_sec": 0, 00:17:38.708 "rw_mbytes_per_sec": 0, 00:17:38.708 "r_mbytes_per_sec": 0, 00:17:38.708 "w_mbytes_per_sec": 0 00:17:38.708 }, 00:17:38.708 "claimed": false, 00:17:38.708 "zoned": false, 00:17:38.708 "supported_io_types": { 00:17:38.708 "read": true, 00:17:38.708 "write": true, 00:17:38.708 "unmap": false, 00:17:38.708 "flush": false, 00:17:38.708 "reset": true, 00:17:38.708 "nvme_admin": false, 00:17:38.708 "nvme_io": false, 00:17:38.708 "nvme_io_md": false, 00:17:38.708 "write_zeroes": true, 00:17:38.708 "zcopy": false, 00:17:38.708 "get_zone_info": false, 00:17:38.708 "zone_management": false, 00:17:38.708 "zone_append": false, 00:17:38.708 "compare": false, 00:17:38.708 "compare_and_write": false, 00:17:38.708 "abort": false, 00:17:38.708 "seek_hole": false, 00:17:38.708 "seek_data": false, 00:17:38.708 "copy": false, 00:17:38.708 "nvme_iov_md": false 00:17:38.708 }, 00:17:38.708 "driver_specific": { 00:17:38.708 "raid": { 00:17:38.708 "uuid": "b787e09b-af1c-4924-80f1-6b07ed5cde05", 00:17:38.708 
"strip_size_kb": 64, 00:17:38.708 "state": "online", 00:17:38.708 "raid_level": "raid5f", 00:17:38.708 "superblock": true, 00:17:38.708 "num_base_bdevs": 3, 00:17:38.708 "num_base_bdevs_discovered": 3, 00:17:38.708 "num_base_bdevs_operational": 3, 00:17:38.708 "base_bdevs_list": [ 00:17:38.708 { 00:17:38.708 "name": "BaseBdev1", 00:17:38.708 "uuid": "bf514877-cd73-4b5a-b2b6-e8e92d61a553", 00:17:38.708 "is_configured": true, 00:17:38.708 "data_offset": 2048, 00:17:38.708 "data_size": 63488 00:17:38.708 }, 00:17:38.708 { 00:17:38.708 "name": "BaseBdev2", 00:17:38.708 "uuid": "3fef8865-fb60-4fd0-8fd9-d54d805ff8d5", 00:17:38.708 "is_configured": true, 00:17:38.708 "data_offset": 2048, 00:17:38.708 "data_size": 63488 00:17:38.708 }, 00:17:38.708 { 00:17:38.708 "name": "BaseBdev3", 00:17:38.708 "uuid": "cdeb043e-aeaa-4504-93e9-35fca91a25ab", 00:17:38.708 "is_configured": true, 00:17:38.708 "data_offset": 2048, 00:17:38.708 "data_size": 63488 00:17:38.708 } 00:17:38.708 ] 00:17:38.708 } 00:17:38.708 } 00:17:38.708 }' 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:38.708 BaseBdev2 00:17:38.708 BaseBdev3' 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.708 08:51:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.708 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.968 [2024-11-20 08:51:09.674364] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:38.968 
08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.968 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.968 "name": "Existed_Raid", 00:17:38.968 "uuid": "b787e09b-af1c-4924-80f1-6b07ed5cde05", 00:17:38.968 "strip_size_kb": 64, 00:17:38.968 "state": "online", 00:17:38.968 "raid_level": "raid5f", 00:17:38.968 "superblock": true, 00:17:38.968 "num_base_bdevs": 3, 00:17:38.968 "num_base_bdevs_discovered": 2, 00:17:38.968 "num_base_bdevs_operational": 2, 00:17:38.968 
"base_bdevs_list": [ 00:17:38.968 { 00:17:38.968 "name": null, 00:17:38.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.968 "is_configured": false, 00:17:38.968 "data_offset": 0, 00:17:38.969 "data_size": 63488 00:17:38.969 }, 00:17:38.969 { 00:17:38.969 "name": "BaseBdev2", 00:17:38.969 "uuid": "3fef8865-fb60-4fd0-8fd9-d54d805ff8d5", 00:17:38.969 "is_configured": true, 00:17:38.969 "data_offset": 2048, 00:17:38.969 "data_size": 63488 00:17:38.969 }, 00:17:38.969 { 00:17:38.969 "name": "BaseBdev3", 00:17:38.969 "uuid": "cdeb043e-aeaa-4504-93e9-35fca91a25ab", 00:17:38.969 "is_configured": true, 00:17:38.969 "data_offset": 2048, 00:17:38.969 "data_size": 63488 00:17:38.969 } 00:17:38.969 ] 00:17:38.969 }' 00:17:38.969 08:51:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.969 08:51:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.536 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:39.536 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:39.536 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.536 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:39.536 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.536 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.536 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.536 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:39.536 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:39.536 08:51:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:39.536 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.536 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.536 [2024-11-20 08:51:10.350849] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:39.536 [2024-11-20 08:51:10.351044] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:39.536 [2024-11-20 08:51:10.439663] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.536 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.536 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:39.536 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:39.536 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:39.536 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.536 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.536 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:39.796 08:51:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.796 [2024-11-20 08:51:10.495694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:39.796 [2024-11-20 08:51:10.495763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 
-- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.796 BaseBdev2 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.796 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.796 [ 00:17:39.796 { 00:17:39.796 "name": "BaseBdev2", 
00:17:39.796 "aliases": [ 00:17:39.796 "6336a037-e876-4f08-bc3d-1776535ebb5f" 00:17:39.796 ], 00:17:39.796 "product_name": "Malloc disk", 00:17:39.796 "block_size": 512, 00:17:39.796 "num_blocks": 65536, 00:17:39.796 "uuid": "6336a037-e876-4f08-bc3d-1776535ebb5f", 00:17:39.796 "assigned_rate_limits": { 00:17:39.796 "rw_ios_per_sec": 0, 00:17:39.796 "rw_mbytes_per_sec": 0, 00:17:39.796 "r_mbytes_per_sec": 0, 00:17:39.796 "w_mbytes_per_sec": 0 00:17:39.796 }, 00:17:39.796 "claimed": false, 00:17:39.796 "zoned": false, 00:17:39.796 "supported_io_types": { 00:17:39.796 "read": true, 00:17:39.796 "write": true, 00:17:39.796 "unmap": true, 00:17:39.796 "flush": true, 00:17:39.796 "reset": true, 00:17:39.796 "nvme_admin": false, 00:17:39.796 "nvme_io": false, 00:17:39.796 "nvme_io_md": false, 00:17:39.796 "write_zeroes": true, 00:17:39.796 "zcopy": true, 00:17:39.796 "get_zone_info": false, 00:17:39.796 "zone_management": false, 00:17:39.796 "zone_append": false, 00:17:39.796 "compare": false, 00:17:39.796 "compare_and_write": false, 00:17:39.796 "abort": true, 00:17:39.796 "seek_hole": false, 00:17:39.796 "seek_data": false, 00:17:39.796 "copy": true, 00:17:39.796 "nvme_iov_md": false 00:17:39.796 }, 00:17:39.796 "memory_domains": [ 00:17:39.796 { 00:17:39.796 "dma_device_id": "system", 00:17:39.796 "dma_device_type": 1 00:17:39.796 }, 00:17:39.796 { 00:17:39.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.796 "dma_device_type": 2 00:17:39.796 } 00:17:39.796 ], 00:17:39.796 "driver_specific": {} 00:17:39.796 } 00:17:40.055 ] 00:17:40.055 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.055 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:40.055 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:40.055 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 
00:17:40.055 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:40.055 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.055 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.055 BaseBdev3 00:17:40.055 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.055 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:40.055 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:40.055 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:40.055 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:40.055 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:40.055 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:40.055 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:40.055 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.055 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.055 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:40.056 [ 00:17:40.056 { 00:17:40.056 "name": "BaseBdev3", 00:17:40.056 "aliases": [ 00:17:40.056 "ad9199b1-2338-4c12-b03c-d64f7fa3038c" 00:17:40.056 ], 00:17:40.056 "product_name": "Malloc disk", 00:17:40.056 "block_size": 512, 00:17:40.056 "num_blocks": 65536, 00:17:40.056 "uuid": "ad9199b1-2338-4c12-b03c-d64f7fa3038c", 00:17:40.056 "assigned_rate_limits": { 00:17:40.056 "rw_ios_per_sec": 0, 00:17:40.056 "rw_mbytes_per_sec": 0, 00:17:40.056 "r_mbytes_per_sec": 0, 00:17:40.056 "w_mbytes_per_sec": 0 00:17:40.056 }, 00:17:40.056 "claimed": false, 00:17:40.056 "zoned": false, 00:17:40.056 "supported_io_types": { 00:17:40.056 "read": true, 00:17:40.056 "write": true, 00:17:40.056 "unmap": true, 00:17:40.056 "flush": true, 00:17:40.056 "reset": true, 00:17:40.056 "nvme_admin": false, 00:17:40.056 "nvme_io": false, 00:17:40.056 "nvme_io_md": false, 00:17:40.056 "write_zeroes": true, 00:17:40.056 "zcopy": true, 00:17:40.056 "get_zone_info": false, 00:17:40.056 "zone_management": false, 00:17:40.056 "zone_append": false, 00:17:40.056 "compare": false, 00:17:40.056 "compare_and_write": false, 00:17:40.056 "abort": true, 00:17:40.056 "seek_hole": false, 00:17:40.056 "seek_data": false, 00:17:40.056 "copy": true, 00:17:40.056 "nvme_iov_md": false 00:17:40.056 }, 00:17:40.056 "memory_domains": [ 00:17:40.056 { 00:17:40.056 "dma_device_id": "system", 00:17:40.056 "dma_device_type": 1 00:17:40.056 }, 00:17:40.056 { 00:17:40.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.056 "dma_device_type": 2 00:17:40.056 } 00:17:40.056 ], 00:17:40.056 "driver_specific": {} 00:17:40.056 } 00:17:40.056 ] 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:40.056 
08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.056 [2024-11-20 08:51:10.792751] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:40.056 [2024-11-20 08:51:10.792946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:40.056 [2024-11-20 08:51:10.793092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:40.056 [2024-11-20 08:51:10.795638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.056 "name": "Existed_Raid", 00:17:40.056 "uuid": "e007b127-b814-44b8-930e-8573ac1c05df", 00:17:40.056 "strip_size_kb": 64, 00:17:40.056 "state": "configuring", 00:17:40.056 "raid_level": "raid5f", 00:17:40.056 "superblock": true, 00:17:40.056 "num_base_bdevs": 3, 00:17:40.056 "num_base_bdevs_discovered": 2, 00:17:40.056 "num_base_bdevs_operational": 3, 00:17:40.056 "base_bdevs_list": [ 00:17:40.056 { 00:17:40.056 "name": "BaseBdev1", 00:17:40.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.056 "is_configured": false, 00:17:40.056 "data_offset": 0, 00:17:40.056 "data_size": 0 00:17:40.056 }, 00:17:40.056 { 00:17:40.056 "name": "BaseBdev2", 00:17:40.056 "uuid": "6336a037-e876-4f08-bc3d-1776535ebb5f", 00:17:40.056 "is_configured": true, 00:17:40.056 "data_offset": 2048, 00:17:40.056 "data_size": 63488 00:17:40.056 }, 00:17:40.056 { 00:17:40.056 "name": "BaseBdev3", 00:17:40.056 "uuid": 
"ad9199b1-2338-4c12-b03c-d64f7fa3038c", 00:17:40.056 "is_configured": true, 00:17:40.056 "data_offset": 2048, 00:17:40.056 "data_size": 63488 00:17:40.056 } 00:17:40.056 ] 00:17:40.056 }' 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.056 08:51:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.625 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:40.625 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.625 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.625 [2024-11-20 08:51:11.296860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:40.625 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.625 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:40.625 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:40.625 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:40.625 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.625 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.625 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:40.625 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.625 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.625 08:51:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.625 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.625 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.625 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.625 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.625 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.625 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.625 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.625 "name": "Existed_Raid", 00:17:40.625 "uuid": "e007b127-b814-44b8-930e-8573ac1c05df", 00:17:40.625 "strip_size_kb": 64, 00:17:40.625 "state": "configuring", 00:17:40.625 "raid_level": "raid5f", 00:17:40.625 "superblock": true, 00:17:40.625 "num_base_bdevs": 3, 00:17:40.625 "num_base_bdevs_discovered": 1, 00:17:40.625 "num_base_bdevs_operational": 3, 00:17:40.625 "base_bdevs_list": [ 00:17:40.625 { 00:17:40.625 "name": "BaseBdev1", 00:17:40.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.625 "is_configured": false, 00:17:40.625 "data_offset": 0, 00:17:40.625 "data_size": 0 00:17:40.625 }, 00:17:40.625 { 00:17:40.625 "name": null, 00:17:40.625 "uuid": "6336a037-e876-4f08-bc3d-1776535ebb5f", 00:17:40.625 "is_configured": false, 00:17:40.625 "data_offset": 0, 00:17:40.625 "data_size": 63488 00:17:40.625 }, 00:17:40.625 { 00:17:40.625 "name": "BaseBdev3", 00:17:40.625 "uuid": "ad9199b1-2338-4c12-b03c-d64f7fa3038c", 00:17:40.625 "is_configured": true, 00:17:40.625 "data_offset": 2048, 00:17:40.625 "data_size": 63488 00:17:40.625 } 00:17:40.625 ] 
00:17:40.625 }' 00:17:40.625 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.625 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.193 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.193 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:41.193 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.193 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.193 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.193 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:41.193 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:41.193 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.193 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.193 [2024-11-20 08:51:11.915052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:41.193 BaseBdev1 00:17:41.193 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.193 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:41.193 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:41.193 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:41.193 08:51:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:17:41.193 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:41.193 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:41.193 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:41.193 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.193 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.193 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.193 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:41.193 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.193 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.193 [ 00:17:41.193 { 00:17:41.193 "name": "BaseBdev1", 00:17:41.193 "aliases": [ 00:17:41.193 "3c2efeda-f67b-4d20-8c68-f3f4b6bb5bf1" 00:17:41.193 ], 00:17:41.193 "product_name": "Malloc disk", 00:17:41.193 "block_size": 512, 00:17:41.193 "num_blocks": 65536, 00:17:41.193 "uuid": "3c2efeda-f67b-4d20-8c68-f3f4b6bb5bf1", 00:17:41.193 "assigned_rate_limits": { 00:17:41.193 "rw_ios_per_sec": 0, 00:17:41.193 "rw_mbytes_per_sec": 0, 00:17:41.193 "r_mbytes_per_sec": 0, 00:17:41.193 "w_mbytes_per_sec": 0 00:17:41.193 }, 00:17:41.193 "claimed": true, 00:17:41.193 "claim_type": "exclusive_write", 00:17:41.193 "zoned": false, 00:17:41.193 "supported_io_types": { 00:17:41.193 "read": true, 00:17:41.193 "write": true, 00:17:41.193 "unmap": true, 00:17:41.193 "flush": true, 00:17:41.193 "reset": true, 00:17:41.193 "nvme_admin": false, 00:17:41.193 "nvme_io": false, 00:17:41.193 
"nvme_io_md": false, 00:17:41.193 "write_zeroes": true, 00:17:41.194 "zcopy": true, 00:17:41.194 "get_zone_info": false, 00:17:41.194 "zone_management": false, 00:17:41.194 "zone_append": false, 00:17:41.194 "compare": false, 00:17:41.194 "compare_and_write": false, 00:17:41.194 "abort": true, 00:17:41.194 "seek_hole": false, 00:17:41.194 "seek_data": false, 00:17:41.194 "copy": true, 00:17:41.194 "nvme_iov_md": false 00:17:41.194 }, 00:17:41.194 "memory_domains": [ 00:17:41.194 { 00:17:41.194 "dma_device_id": "system", 00:17:41.194 "dma_device_type": 1 00:17:41.194 }, 00:17:41.194 { 00:17:41.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.194 "dma_device_type": 2 00:17:41.194 } 00:17:41.194 ], 00:17:41.194 "driver_specific": {} 00:17:41.194 } 00:17:41.194 ] 00:17:41.194 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.194 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:41.194 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:41.194 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:41.194 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:41.194 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:41.194 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.194 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:41.194 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.194 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.194 
08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.194 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.194 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.194 08:51:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.194 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.194 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.194 08:51:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.194 08:51:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.194 "name": "Existed_Raid", 00:17:41.194 "uuid": "e007b127-b814-44b8-930e-8573ac1c05df", 00:17:41.194 "strip_size_kb": 64, 00:17:41.194 "state": "configuring", 00:17:41.194 "raid_level": "raid5f", 00:17:41.194 "superblock": true, 00:17:41.194 "num_base_bdevs": 3, 00:17:41.194 "num_base_bdevs_discovered": 2, 00:17:41.194 "num_base_bdevs_operational": 3, 00:17:41.194 "base_bdevs_list": [ 00:17:41.194 { 00:17:41.194 "name": "BaseBdev1", 00:17:41.194 "uuid": "3c2efeda-f67b-4d20-8c68-f3f4b6bb5bf1", 00:17:41.194 "is_configured": true, 00:17:41.194 "data_offset": 2048, 00:17:41.194 "data_size": 63488 00:17:41.194 }, 00:17:41.194 { 00:17:41.194 "name": null, 00:17:41.194 "uuid": "6336a037-e876-4f08-bc3d-1776535ebb5f", 00:17:41.194 "is_configured": false, 00:17:41.194 "data_offset": 0, 00:17:41.194 "data_size": 63488 00:17:41.194 }, 00:17:41.194 { 00:17:41.194 "name": "BaseBdev3", 00:17:41.194 "uuid": "ad9199b1-2338-4c12-b03c-d64f7fa3038c", 00:17:41.194 "is_configured": true, 00:17:41.194 "data_offset": 2048, 00:17:41.194 "data_size": 63488 00:17:41.194 } 
00:17:41.194 ] 00:17:41.194 }' 00:17:41.194 08:51:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.194 08:51:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.762 [2024-11-20 08:51:12.547321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.762 "name": "Existed_Raid", 00:17:41.762 "uuid": "e007b127-b814-44b8-930e-8573ac1c05df", 00:17:41.762 "strip_size_kb": 64, 00:17:41.762 "state": "configuring", 00:17:41.762 "raid_level": "raid5f", 00:17:41.762 "superblock": true, 00:17:41.762 "num_base_bdevs": 3, 00:17:41.762 "num_base_bdevs_discovered": 1, 00:17:41.762 "num_base_bdevs_operational": 3, 00:17:41.762 "base_bdevs_list": [ 00:17:41.762 { 00:17:41.762 "name": "BaseBdev1", 00:17:41.762 "uuid": "3c2efeda-f67b-4d20-8c68-f3f4b6bb5bf1", 00:17:41.762 "is_configured": true, 
00:17:41.762 "data_offset": 2048, 00:17:41.762 "data_size": 63488 00:17:41.762 }, 00:17:41.762 { 00:17:41.762 "name": null, 00:17:41.762 "uuid": "6336a037-e876-4f08-bc3d-1776535ebb5f", 00:17:41.762 "is_configured": false, 00:17:41.762 "data_offset": 0, 00:17:41.762 "data_size": 63488 00:17:41.762 }, 00:17:41.762 { 00:17:41.762 "name": null, 00:17:41.762 "uuid": "ad9199b1-2338-4c12-b03c-d64f7fa3038c", 00:17:41.762 "is_configured": false, 00:17:41.762 "data_offset": 0, 00:17:41.762 "data_size": 63488 00:17:41.762 } 00:17:41.762 ] 00:17:41.762 }' 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.762 08:51:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.339 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.339 08:51:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.339 08:51:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.339 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:42.339 08:51:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.339 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:42.339 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:42.339 08:51:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.339 08:51:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.339 [2024-11-20 08:51:13.115503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:42.339 08:51:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.339 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:42.339 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.339 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:42.339 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.339 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.339 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:42.339 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.339 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.340 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.340 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.340 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.340 08:51:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.340 08:51:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.340 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.340 08:51:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.340 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:17:42.340 "name": "Existed_Raid", 00:17:42.340 "uuid": "e007b127-b814-44b8-930e-8573ac1c05df", 00:17:42.340 "strip_size_kb": 64, 00:17:42.340 "state": "configuring", 00:17:42.340 "raid_level": "raid5f", 00:17:42.340 "superblock": true, 00:17:42.340 "num_base_bdevs": 3, 00:17:42.340 "num_base_bdevs_discovered": 2, 00:17:42.340 "num_base_bdevs_operational": 3, 00:17:42.340 "base_bdevs_list": [ 00:17:42.340 { 00:17:42.340 "name": "BaseBdev1", 00:17:42.340 "uuid": "3c2efeda-f67b-4d20-8c68-f3f4b6bb5bf1", 00:17:42.340 "is_configured": true, 00:17:42.340 "data_offset": 2048, 00:17:42.340 "data_size": 63488 00:17:42.340 }, 00:17:42.340 { 00:17:42.340 "name": null, 00:17:42.340 "uuid": "6336a037-e876-4f08-bc3d-1776535ebb5f", 00:17:42.340 "is_configured": false, 00:17:42.340 "data_offset": 0, 00:17:42.340 "data_size": 63488 00:17:42.340 }, 00:17:42.340 { 00:17:42.340 "name": "BaseBdev3", 00:17:42.340 "uuid": "ad9199b1-2338-4c12-b03c-d64f7fa3038c", 00:17:42.340 "is_configured": true, 00:17:42.340 "data_offset": 2048, 00:17:42.340 "data_size": 63488 00:17:42.340 } 00:17:42.340 ] 00:17:42.340 }' 00:17:42.340 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.340 08:51:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.935 [2024-11-20 08:51:13.695703] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.935 08:51:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.935 "name": "Existed_Raid", 00:17:42.935 "uuid": "e007b127-b814-44b8-930e-8573ac1c05df", 00:17:42.935 "strip_size_kb": 64, 00:17:42.935 "state": "configuring", 00:17:42.935 "raid_level": "raid5f", 00:17:42.935 "superblock": true, 00:17:42.935 "num_base_bdevs": 3, 00:17:42.935 "num_base_bdevs_discovered": 1, 00:17:42.935 "num_base_bdevs_operational": 3, 00:17:42.935 "base_bdevs_list": [ 00:17:42.935 { 00:17:42.935 "name": null, 00:17:42.935 "uuid": "3c2efeda-f67b-4d20-8c68-f3f4b6bb5bf1", 00:17:42.935 "is_configured": false, 00:17:42.935 "data_offset": 0, 00:17:42.935 "data_size": 63488 00:17:42.935 }, 00:17:42.935 { 00:17:42.935 "name": null, 00:17:42.935 "uuid": "6336a037-e876-4f08-bc3d-1776535ebb5f", 00:17:42.935 "is_configured": false, 00:17:42.935 "data_offset": 0, 00:17:42.935 "data_size": 63488 00:17:42.935 }, 00:17:42.935 { 00:17:42.935 "name": "BaseBdev3", 00:17:42.935 "uuid": "ad9199b1-2338-4c12-b03c-d64f7fa3038c", 00:17:42.935 "is_configured": true, 00:17:42.935 "data_offset": 2048, 00:17:42.935 "data_size": 63488 00:17:42.935 } 00:17:42.935 ] 00:17:42.935 }' 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.935 08:51:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.504 08:51:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:17:43.504 08:51:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.504 08:51:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.504 08:51:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.504 08:51:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.504 08:51:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:43.504 08:51:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:43.504 08:51:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.504 08:51:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.504 [2024-11-20 08:51:14.365795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:43.504 08:51:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.504 08:51:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:43.504 08:51:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.504 08:51:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:43.504 08:51:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:43.504 08:51:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:43.504 08:51:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:43.504 
08:51:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.504 08:51:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.504 08:51:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.504 08:51:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.504 08:51:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.504 08:51:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.504 08:51:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.504 08:51:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.504 08:51:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.764 08:51:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.764 "name": "Existed_Raid", 00:17:43.764 "uuid": "e007b127-b814-44b8-930e-8573ac1c05df", 00:17:43.764 "strip_size_kb": 64, 00:17:43.764 "state": "configuring", 00:17:43.764 "raid_level": "raid5f", 00:17:43.764 "superblock": true, 00:17:43.764 "num_base_bdevs": 3, 00:17:43.764 "num_base_bdevs_discovered": 2, 00:17:43.764 "num_base_bdevs_operational": 3, 00:17:43.764 "base_bdevs_list": [ 00:17:43.764 { 00:17:43.764 "name": null, 00:17:43.764 "uuid": "3c2efeda-f67b-4d20-8c68-f3f4b6bb5bf1", 00:17:43.764 "is_configured": false, 00:17:43.764 "data_offset": 0, 00:17:43.764 "data_size": 63488 00:17:43.764 }, 00:17:43.764 { 00:17:43.764 "name": "BaseBdev2", 00:17:43.764 "uuid": "6336a037-e876-4f08-bc3d-1776535ebb5f", 00:17:43.764 "is_configured": true, 00:17:43.764 "data_offset": 2048, 00:17:43.764 "data_size": 63488 00:17:43.764 }, 
00:17:43.764 { 00:17:43.764 "name": "BaseBdev3", 00:17:43.764 "uuid": "ad9199b1-2338-4c12-b03c-d64f7fa3038c", 00:17:43.764 "is_configured": true, 00:17:43.764 "data_offset": 2048, 00:17:43.764 "data_size": 63488 00:17:43.764 } 00:17:43.764 ] 00:17:43.764 }' 00:17:43.764 08:51:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.764 08:51:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.023 08:51:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:44.023 08:51:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.023 08:51:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.023 08:51:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.023 08:51:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.283 08:51:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:44.283 08:51:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.283 08:51:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:44.283 08:51:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.283 08:51:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.283 08:51:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.283 08:51:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3c2efeda-f67b-4d20-8c68-f3f4b6bb5bf1 00:17:44.283 08:51:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.283 08:51:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.283 [2024-11-20 08:51:15.035778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:44.284 [2024-11-20 08:51:15.036075] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:44.284 [2024-11-20 08:51:15.036102] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:44.284 [2024-11-20 08:51:15.036467] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:44.284 NewBaseBdev 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.284 [2024-11-20 08:51:15.041323] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev 
generic 0x617000008200 00:17:44.284 [2024-11-20 08:51:15.041351] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:44.284 [2024-11-20 08:51:15.041543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.284 [ 00:17:44.284 { 00:17:44.284 "name": "NewBaseBdev", 00:17:44.284 "aliases": [ 00:17:44.284 "3c2efeda-f67b-4d20-8c68-f3f4b6bb5bf1" 00:17:44.284 ], 00:17:44.284 "product_name": "Malloc disk", 00:17:44.284 "block_size": 512, 00:17:44.284 "num_blocks": 65536, 00:17:44.284 "uuid": "3c2efeda-f67b-4d20-8c68-f3f4b6bb5bf1", 00:17:44.284 "assigned_rate_limits": { 00:17:44.284 "rw_ios_per_sec": 0, 00:17:44.284 "rw_mbytes_per_sec": 0, 00:17:44.284 "r_mbytes_per_sec": 0, 00:17:44.284 "w_mbytes_per_sec": 0 00:17:44.284 }, 00:17:44.284 "claimed": true, 00:17:44.284 "claim_type": "exclusive_write", 00:17:44.284 "zoned": false, 00:17:44.284 "supported_io_types": { 00:17:44.284 "read": true, 00:17:44.284 "write": true, 00:17:44.284 "unmap": true, 00:17:44.284 "flush": true, 00:17:44.284 "reset": true, 00:17:44.284 "nvme_admin": false, 00:17:44.284 "nvme_io": false, 00:17:44.284 "nvme_io_md": false, 00:17:44.284 "write_zeroes": true, 00:17:44.284 "zcopy": true, 00:17:44.284 "get_zone_info": false, 00:17:44.284 "zone_management": false, 00:17:44.284 "zone_append": false, 00:17:44.284 "compare": false, 00:17:44.284 "compare_and_write": false, 00:17:44.284 "abort": true, 00:17:44.284 "seek_hole": false, 
00:17:44.284 "seek_data": false, 00:17:44.284 "copy": true, 00:17:44.284 "nvme_iov_md": false 00:17:44.284 }, 00:17:44.284 "memory_domains": [ 00:17:44.284 { 00:17:44.284 "dma_device_id": "system", 00:17:44.284 "dma_device_type": 1 00:17:44.284 }, 00:17:44.284 { 00:17:44.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.284 "dma_device_type": 2 00:17:44.284 } 00:17:44.284 ], 00:17:44.284 "driver_specific": {} 00:17:44.284 } 00:17:44.284 ] 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.284 "name": "Existed_Raid", 00:17:44.284 "uuid": "e007b127-b814-44b8-930e-8573ac1c05df", 00:17:44.284 "strip_size_kb": 64, 00:17:44.284 "state": "online", 00:17:44.284 "raid_level": "raid5f", 00:17:44.284 "superblock": true, 00:17:44.284 "num_base_bdevs": 3, 00:17:44.284 "num_base_bdevs_discovered": 3, 00:17:44.284 "num_base_bdevs_operational": 3, 00:17:44.284 "base_bdevs_list": [ 00:17:44.284 { 00:17:44.284 "name": "NewBaseBdev", 00:17:44.284 "uuid": "3c2efeda-f67b-4d20-8c68-f3f4b6bb5bf1", 00:17:44.284 "is_configured": true, 00:17:44.284 "data_offset": 2048, 00:17:44.284 "data_size": 63488 00:17:44.284 }, 00:17:44.284 { 00:17:44.284 "name": "BaseBdev2", 00:17:44.284 "uuid": "6336a037-e876-4f08-bc3d-1776535ebb5f", 00:17:44.284 "is_configured": true, 00:17:44.284 "data_offset": 2048, 00:17:44.284 "data_size": 63488 00:17:44.284 }, 00:17:44.284 { 00:17:44.284 "name": "BaseBdev3", 00:17:44.284 "uuid": "ad9199b1-2338-4c12-b03c-d64f7fa3038c", 00:17:44.284 "is_configured": true, 00:17:44.284 "data_offset": 2048, 00:17:44.284 "data_size": 63488 00:17:44.284 } 00:17:44.284 ] 00:17:44.284 }' 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.284 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.853 [2024-11-20 08:51:15.567460] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:44.853 "name": "Existed_Raid", 00:17:44.853 "aliases": [ 00:17:44.853 "e007b127-b814-44b8-930e-8573ac1c05df" 00:17:44.853 ], 00:17:44.853 "product_name": "Raid Volume", 00:17:44.853 "block_size": 512, 00:17:44.853 "num_blocks": 126976, 00:17:44.853 "uuid": "e007b127-b814-44b8-930e-8573ac1c05df", 00:17:44.853 "assigned_rate_limits": { 00:17:44.853 "rw_ios_per_sec": 0, 00:17:44.853 "rw_mbytes_per_sec": 0, 00:17:44.853 "r_mbytes_per_sec": 0, 00:17:44.853 "w_mbytes_per_sec": 0 00:17:44.853 }, 00:17:44.853 "claimed": false, 00:17:44.853 "zoned": false, 00:17:44.853 
"supported_io_types": { 00:17:44.853 "read": true, 00:17:44.853 "write": true, 00:17:44.853 "unmap": false, 00:17:44.853 "flush": false, 00:17:44.853 "reset": true, 00:17:44.853 "nvme_admin": false, 00:17:44.853 "nvme_io": false, 00:17:44.853 "nvme_io_md": false, 00:17:44.853 "write_zeroes": true, 00:17:44.853 "zcopy": false, 00:17:44.853 "get_zone_info": false, 00:17:44.853 "zone_management": false, 00:17:44.853 "zone_append": false, 00:17:44.853 "compare": false, 00:17:44.853 "compare_and_write": false, 00:17:44.853 "abort": false, 00:17:44.853 "seek_hole": false, 00:17:44.853 "seek_data": false, 00:17:44.853 "copy": false, 00:17:44.853 "nvme_iov_md": false 00:17:44.853 }, 00:17:44.853 "driver_specific": { 00:17:44.853 "raid": { 00:17:44.853 "uuid": "e007b127-b814-44b8-930e-8573ac1c05df", 00:17:44.853 "strip_size_kb": 64, 00:17:44.853 "state": "online", 00:17:44.853 "raid_level": "raid5f", 00:17:44.853 "superblock": true, 00:17:44.853 "num_base_bdevs": 3, 00:17:44.853 "num_base_bdevs_discovered": 3, 00:17:44.853 "num_base_bdevs_operational": 3, 00:17:44.853 "base_bdevs_list": [ 00:17:44.853 { 00:17:44.853 "name": "NewBaseBdev", 00:17:44.853 "uuid": "3c2efeda-f67b-4d20-8c68-f3f4b6bb5bf1", 00:17:44.853 "is_configured": true, 00:17:44.853 "data_offset": 2048, 00:17:44.853 "data_size": 63488 00:17:44.853 }, 00:17:44.853 { 00:17:44.853 "name": "BaseBdev2", 00:17:44.853 "uuid": "6336a037-e876-4f08-bc3d-1776535ebb5f", 00:17:44.853 "is_configured": true, 00:17:44.853 "data_offset": 2048, 00:17:44.853 "data_size": 63488 00:17:44.853 }, 00:17:44.853 { 00:17:44.853 "name": "BaseBdev3", 00:17:44.853 "uuid": "ad9199b1-2338-4c12-b03c-d64f7fa3038c", 00:17:44.853 "is_configured": true, 00:17:44.853 "data_offset": 2048, 00:17:44.853 "data_size": 63488 00:17:44.853 } 00:17:44.853 ] 00:17:44.853 } 00:17:44.853 } 00:17:44.853 }' 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:44.853 BaseBdev2 00:17:44.853 BaseBdev3' 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:44.853 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:45.112 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:45.112 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.112 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:45.112 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.112 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.112 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:45.113 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:45.113 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:45.113 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:45.113 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.113 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.113 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.113 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.113 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:45.113 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:45.113 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:45.113 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.113 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.113 [2024-11-20 08:51:15.899307] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:45.113 [2024-11-20 08:51:15.899340] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:17:45.113 [2024-11-20 08:51:15.899432] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:45.113 [2024-11-20 08:51:15.899791] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:45.113 [2024-11-20 08:51:15.899814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:45.113 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.113 08:51:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80870 00:17:45.113 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80870 ']' 00:17:45.113 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80870 00:17:45.113 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:45.113 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:45.113 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80870 00:17:45.113 killing process with pid 80870 00:17:45.113 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:45.113 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:45.113 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80870' 00:17:45.113 08:51:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80870 00:17:45.113 [2024-11-20 08:51:15.938017] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:45.113 08:51:15 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 80870 00:17:45.372 [2024-11-20 08:51:16.204241] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:46.750 08:51:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:46.750 00:17:46.750 real 0m11.738s 00:17:46.750 user 0m19.442s 00:17:46.750 sys 0m1.703s 00:17:46.751 08:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.751 ************************************ 00:17:46.751 END TEST raid5f_state_function_test_sb 00:17:46.751 ************************************ 00:17:46.751 08:51:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.751 08:51:17 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:17:46.751 08:51:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:46.751 08:51:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:46.751 08:51:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:46.751 ************************************ 00:17:46.751 START TEST raid5f_superblock_test 00:17:46.751 ************************************ 00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:46.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81503 00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81503 00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81503 ']' 00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.751 08:51:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.751 [2024-11-20 08:51:17.385639] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:17:46.751 [2024-11-20 08:51:17.385799] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81503 ] 00:17:46.751 [2024-11-20 08:51:17.561064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.010 [2024-11-20 08:51:17.691888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.010 [2024-11-20 08:51:17.894422] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.010 [2024-11-20 08:51:17.894500] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.578 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.578 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:17:47.578 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:47.578 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:47.578 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:47.578 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:47.578 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:47.578 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:47.578 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:47.578 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:47.578 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:17:47.578 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.579 malloc1 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.579 [2024-11-20 08:51:18.427107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:47.579 [2024-11-20 08:51:18.427206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.579 [2024-11-20 08:51:18.427244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:47.579 [2024-11-20 08:51:18.427261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.579 [2024-11-20 08:51:18.430057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.579 [2024-11-20 08:51:18.430261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:47.579 pt1 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.579 malloc2 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.579 [2024-11-20 08:51:18.483250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:47.579 [2024-11-20 08:51:18.483458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.579 [2024-11-20 08:51:18.483536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:47.579 [2024-11-20 08:51:18.483650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.579 [2024-11-20 08:51:18.486475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.579 [2024-11-20 08:51:18.486661] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:47.579 pt2 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.579 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.839 malloc3 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.839 [2024-11-20 08:51:18.552400] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:47.839 [2024-11-20 08:51:18.552608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.839 [2024-11-20 08:51:18.552691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:47.839 [2024-11-20 08:51:18.552902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.839 [2024-11-20 08:51:18.555680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.839 [2024-11-20 08:51:18.555841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:47.839 pt3 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.839 [2024-11-20 08:51:18.564621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:47.839 [2024-11-20 08:51:18.567021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:47.839 [2024-11-20 08:51:18.567118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:47.839 [2024-11-20 08:51:18.567368] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:47.839 [2024-11-20 08:51:18.567399] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:17:47.839 [2024-11-20 08:51:18.567714] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:47.839 [2024-11-20 08:51:18.572973] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:47.839 [2024-11-20 08:51:18.573117] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:47.839 [2024-11-20 08:51:18.573571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.839 "name": "raid_bdev1", 00:17:47.839 "uuid": "48e80f1c-39f4-4ca1-8cd1-270b7d1ce44f", 00:17:47.839 "strip_size_kb": 64, 00:17:47.839 "state": "online", 00:17:47.839 "raid_level": "raid5f", 00:17:47.839 "superblock": true, 00:17:47.839 "num_base_bdevs": 3, 00:17:47.839 "num_base_bdevs_discovered": 3, 00:17:47.839 "num_base_bdevs_operational": 3, 00:17:47.839 "base_bdevs_list": [ 00:17:47.839 { 00:17:47.839 "name": "pt1", 00:17:47.839 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:47.839 "is_configured": true, 00:17:47.839 "data_offset": 2048, 00:17:47.839 "data_size": 63488 00:17:47.839 }, 00:17:47.839 { 00:17:47.839 "name": "pt2", 00:17:47.839 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:47.839 "is_configured": true, 00:17:47.839 "data_offset": 2048, 00:17:47.839 "data_size": 63488 00:17:47.839 }, 00:17:47.839 { 00:17:47.839 "name": "pt3", 00:17:47.839 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:47.839 "is_configured": true, 00:17:47.839 "data_offset": 2048, 00:17:47.839 "data_size": 63488 00:17:47.839 } 00:17:47.839 ] 00:17:47.839 }' 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.839 08:51:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.448 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:48.448 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:48.448 08:51:19 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:48.448 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:48.448 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:48.448 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:48.448 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:48.448 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:48.448 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.448 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.448 [2024-11-20 08:51:19.127718] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:48.448 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.448 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:48.448 "name": "raid_bdev1", 00:17:48.448 "aliases": [ 00:17:48.448 "48e80f1c-39f4-4ca1-8cd1-270b7d1ce44f" 00:17:48.448 ], 00:17:48.448 "product_name": "Raid Volume", 00:17:48.448 "block_size": 512, 00:17:48.448 "num_blocks": 126976, 00:17:48.448 "uuid": "48e80f1c-39f4-4ca1-8cd1-270b7d1ce44f", 00:17:48.448 "assigned_rate_limits": { 00:17:48.448 "rw_ios_per_sec": 0, 00:17:48.448 "rw_mbytes_per_sec": 0, 00:17:48.448 "r_mbytes_per_sec": 0, 00:17:48.448 "w_mbytes_per_sec": 0 00:17:48.448 }, 00:17:48.448 "claimed": false, 00:17:48.448 "zoned": false, 00:17:48.448 "supported_io_types": { 00:17:48.448 "read": true, 00:17:48.448 "write": true, 00:17:48.448 "unmap": false, 00:17:48.448 "flush": false, 00:17:48.448 "reset": true, 00:17:48.448 "nvme_admin": false, 00:17:48.448 "nvme_io": false, 00:17:48.448 "nvme_io_md": false, 
00:17:48.448 "write_zeroes": true, 00:17:48.448 "zcopy": false, 00:17:48.448 "get_zone_info": false, 00:17:48.448 "zone_management": false, 00:17:48.448 "zone_append": false, 00:17:48.448 "compare": false, 00:17:48.448 "compare_and_write": false, 00:17:48.448 "abort": false, 00:17:48.448 "seek_hole": false, 00:17:48.448 "seek_data": false, 00:17:48.448 "copy": false, 00:17:48.448 "nvme_iov_md": false 00:17:48.448 }, 00:17:48.448 "driver_specific": { 00:17:48.448 "raid": { 00:17:48.448 "uuid": "48e80f1c-39f4-4ca1-8cd1-270b7d1ce44f", 00:17:48.448 "strip_size_kb": 64, 00:17:48.448 "state": "online", 00:17:48.448 "raid_level": "raid5f", 00:17:48.448 "superblock": true, 00:17:48.448 "num_base_bdevs": 3, 00:17:48.448 "num_base_bdevs_discovered": 3, 00:17:48.448 "num_base_bdevs_operational": 3, 00:17:48.448 "base_bdevs_list": [ 00:17:48.448 { 00:17:48.448 "name": "pt1", 00:17:48.448 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:48.448 "is_configured": true, 00:17:48.448 "data_offset": 2048, 00:17:48.448 "data_size": 63488 00:17:48.448 }, 00:17:48.448 { 00:17:48.448 "name": "pt2", 00:17:48.448 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:48.448 "is_configured": true, 00:17:48.448 "data_offset": 2048, 00:17:48.448 "data_size": 63488 00:17:48.448 }, 00:17:48.448 { 00:17:48.448 "name": "pt3", 00:17:48.448 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:48.448 "is_configured": true, 00:17:48.448 "data_offset": 2048, 00:17:48.448 "data_size": 63488 00:17:48.448 } 00:17:48.448 ] 00:17:48.448 } 00:17:48.448 } 00:17:48.448 }' 00:17:48.448 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:48.449 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:48.449 pt2 00:17:48.449 pt3' 00:17:48.449 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:17:48.449 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:48.449 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.449 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:48.449 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.449 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.449 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.449 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.449 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.449 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:48.449 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.449 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:48.449 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.449 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.449 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.449 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:48.709 
08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.709 [2024-11-20 08:51:19.427753] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=48e80f1c-39f4-4ca1-8cd1-270b7d1ce44f 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 48e80f1c-39f4-4ca1-8cd1-270b7d1ce44f ']' 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:48.709 08:51:19 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.709 [2024-11-20 08:51:19.483544] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:48.709 [2024-11-20 08:51:19.483582] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.709 [2024-11-20 08:51:19.483677] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.709 [2024-11-20 08:51:19.483777] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:48.709 [2024-11-20 08:51:19.483795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.709 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.968 [2024-11-20 08:51:19.623647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:48.968 [2024-11-20 08:51:19.626194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:48.968 [2024-11-20 08:51:19.626400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:48.968 [2024-11-20 08:51:19.626489] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:48.968 [2024-11-20 08:51:19.626568] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:48.968 [2024-11-20 08:51:19.626603] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:48.968 [2024-11-20 08:51:19.626632] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:48.968 [2024-11-20 08:51:19.626646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:48.968 request: 00:17:48.968 { 00:17:48.968 "name": "raid_bdev1", 00:17:48.968 "raid_level": "raid5f", 00:17:48.968 "base_bdevs": [ 00:17:48.968 "malloc1", 00:17:48.968 "malloc2", 00:17:48.968 "malloc3" 00:17:48.968 ], 00:17:48.968 "strip_size_kb": 64, 00:17:48.968 "superblock": false, 00:17:48.968 "method": "bdev_raid_create", 00:17:48.968 "req_id": 1 00:17:48.968 } 00:17:48.968 Got JSON-RPC error response 00:17:48.968 response: 00:17:48.968 { 00:17:48.968 "code": -17, 00:17:48.968 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:48.968 } 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.968 
08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.968 [2024-11-20 08:51:19.687585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:48.968 [2024-11-20 08:51:19.687777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.968 [2024-11-20 08:51:19.687854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:48.968 [2024-11-20 08:51:19.687965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.968 [2024-11-20 08:51:19.690880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.968 [2024-11-20 08:51:19.691041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:48.968 [2024-11-20 08:51:19.691304] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:48.968 [2024-11-20 08:51:19.691483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:48.968 pt1 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.968 "name": "raid_bdev1", 00:17:48.968 "uuid": "48e80f1c-39f4-4ca1-8cd1-270b7d1ce44f", 00:17:48.968 "strip_size_kb": 64, 00:17:48.968 "state": "configuring", 00:17:48.968 "raid_level": "raid5f", 00:17:48.968 "superblock": true, 00:17:48.968 "num_base_bdevs": 3, 00:17:48.968 "num_base_bdevs_discovered": 1, 00:17:48.968 
"num_base_bdevs_operational": 3, 00:17:48.968 "base_bdevs_list": [ 00:17:48.968 { 00:17:48.968 "name": "pt1", 00:17:48.968 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:48.968 "is_configured": true, 00:17:48.968 "data_offset": 2048, 00:17:48.968 "data_size": 63488 00:17:48.968 }, 00:17:48.968 { 00:17:48.968 "name": null, 00:17:48.968 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:48.968 "is_configured": false, 00:17:48.968 "data_offset": 2048, 00:17:48.968 "data_size": 63488 00:17:48.968 }, 00:17:48.968 { 00:17:48.968 "name": null, 00:17:48.968 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:48.968 "is_configured": false, 00:17:48.968 "data_offset": 2048, 00:17:48.968 "data_size": 63488 00:17:48.968 } 00:17:48.968 ] 00:17:48.968 }' 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.968 08:51:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.535 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:17:49.535 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:49.535 08:51:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.535 08:51:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.535 [2024-11-20 08:51:20.220012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:49.535 [2024-11-20 08:51:20.220095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.535 [2024-11-20 08:51:20.220130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:49.535 [2024-11-20 08:51:20.220160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.535 [2024-11-20 08:51:20.220730] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.535 [2024-11-20 08:51:20.220765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:49.535 [2024-11-20 08:51:20.220874] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:49.536 [2024-11-20 08:51:20.220915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:49.536 pt2 00:17:49.536 08:51:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.536 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:49.536 08:51:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.536 08:51:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.536 [2024-11-20 08:51:20.227995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:49.536 08:51:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.536 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:49.536 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.536 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.536 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.536 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.536 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:49.536 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.536 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:17:49.536 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.536 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.536 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.536 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.536 08:51:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.536 08:51:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.536 08:51:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.536 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.536 "name": "raid_bdev1", 00:17:49.536 "uuid": "48e80f1c-39f4-4ca1-8cd1-270b7d1ce44f", 00:17:49.536 "strip_size_kb": 64, 00:17:49.536 "state": "configuring", 00:17:49.536 "raid_level": "raid5f", 00:17:49.536 "superblock": true, 00:17:49.536 "num_base_bdevs": 3, 00:17:49.536 "num_base_bdevs_discovered": 1, 00:17:49.536 "num_base_bdevs_operational": 3, 00:17:49.536 "base_bdevs_list": [ 00:17:49.536 { 00:17:49.536 "name": "pt1", 00:17:49.536 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:49.536 "is_configured": true, 00:17:49.536 "data_offset": 2048, 00:17:49.536 "data_size": 63488 00:17:49.536 }, 00:17:49.536 { 00:17:49.536 "name": null, 00:17:49.536 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.536 "is_configured": false, 00:17:49.536 "data_offset": 0, 00:17:49.536 "data_size": 63488 00:17:49.536 }, 00:17:49.536 { 00:17:49.536 "name": null, 00:17:49.536 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:49.536 "is_configured": false, 00:17:49.536 "data_offset": 2048, 00:17:49.536 "data_size": 63488 00:17:49.536 } 00:17:49.536 ] 00:17:49.536 }' 00:17:49.536 08:51:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.536 08:51:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.104 [2024-11-20 08:51:20.784120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:50.104 [2024-11-20 08:51:20.784221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.104 [2024-11-20 08:51:20.784261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:50.104 [2024-11-20 08:51:20.784282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.104 [2024-11-20 08:51:20.784856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.104 [2024-11-20 08:51:20.784887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:50.104 [2024-11-20 08:51:20.784989] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:50.104 [2024-11-20 08:51:20.785039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:50.104 pt2 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:50.104 08:51:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.104 [2024-11-20 08:51:20.792109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:50.104 [2024-11-20 08:51:20.792346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.104 [2024-11-20 08:51:20.792381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:50.104 [2024-11-20 08:51:20.792400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.104 [2024-11-20 08:51:20.792909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.104 [2024-11-20 08:51:20.792952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:50.104 [2024-11-20 08:51:20.793043] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:50.104 [2024-11-20 08:51:20.793081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:50.104 [2024-11-20 08:51:20.793260] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:50.104 [2024-11-20 08:51:20.793288] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:50.104 [2024-11-20 08:51:20.793592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:50.104 [2024-11-20 08:51:20.798496] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:50.104 [2024-11-20 08:51:20.798523] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:50.104 [2024-11-20 08:51:20.798768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.104 pt3 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.104 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.104 "name": "raid_bdev1", 00:17:50.104 "uuid": "48e80f1c-39f4-4ca1-8cd1-270b7d1ce44f", 00:17:50.104 "strip_size_kb": 64, 00:17:50.104 "state": "online", 00:17:50.104 "raid_level": "raid5f", 00:17:50.104 "superblock": true, 00:17:50.104 "num_base_bdevs": 3, 00:17:50.104 "num_base_bdevs_discovered": 3, 00:17:50.104 "num_base_bdevs_operational": 3, 00:17:50.104 "base_bdevs_list": [ 00:17:50.104 { 00:17:50.104 "name": "pt1", 00:17:50.104 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:50.104 "is_configured": true, 00:17:50.104 "data_offset": 2048, 00:17:50.104 "data_size": 63488 00:17:50.104 }, 00:17:50.104 { 00:17:50.104 "name": "pt2", 00:17:50.104 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.104 "is_configured": true, 00:17:50.104 "data_offset": 2048, 00:17:50.104 "data_size": 63488 00:17:50.104 }, 00:17:50.104 { 00:17:50.104 "name": "pt3", 00:17:50.104 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:50.104 "is_configured": true, 00:17:50.104 "data_offset": 2048, 00:17:50.104 "data_size": 63488 00:17:50.105 } 00:17:50.105 ] 00:17:50.105 }' 00:17:50.105 08:51:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.105 08:51:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.674 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:50.674 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:50.674 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:50.674 
08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:50.674 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:50.674 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:50.674 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:50.674 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:50.674 08:51:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.674 08:51:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.675 [2024-11-20 08:51:21.340794] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:50.675 "name": "raid_bdev1", 00:17:50.675 "aliases": [ 00:17:50.675 "48e80f1c-39f4-4ca1-8cd1-270b7d1ce44f" 00:17:50.675 ], 00:17:50.675 "product_name": "Raid Volume", 00:17:50.675 "block_size": 512, 00:17:50.675 "num_blocks": 126976, 00:17:50.675 "uuid": "48e80f1c-39f4-4ca1-8cd1-270b7d1ce44f", 00:17:50.675 "assigned_rate_limits": { 00:17:50.675 "rw_ios_per_sec": 0, 00:17:50.675 "rw_mbytes_per_sec": 0, 00:17:50.675 "r_mbytes_per_sec": 0, 00:17:50.675 "w_mbytes_per_sec": 0 00:17:50.675 }, 00:17:50.675 "claimed": false, 00:17:50.675 "zoned": false, 00:17:50.675 "supported_io_types": { 00:17:50.675 "read": true, 00:17:50.675 "write": true, 00:17:50.675 "unmap": false, 00:17:50.675 "flush": false, 00:17:50.675 "reset": true, 00:17:50.675 "nvme_admin": false, 00:17:50.675 "nvme_io": false, 00:17:50.675 "nvme_io_md": false, 00:17:50.675 "write_zeroes": true, 00:17:50.675 "zcopy": false, 00:17:50.675 "get_zone_info": false, 
00:17:50.675 "zone_management": false, 00:17:50.675 "zone_append": false, 00:17:50.675 "compare": false, 00:17:50.675 "compare_and_write": false, 00:17:50.675 "abort": false, 00:17:50.675 "seek_hole": false, 00:17:50.675 "seek_data": false, 00:17:50.675 "copy": false, 00:17:50.675 "nvme_iov_md": false 00:17:50.675 }, 00:17:50.675 "driver_specific": { 00:17:50.675 "raid": { 00:17:50.675 "uuid": "48e80f1c-39f4-4ca1-8cd1-270b7d1ce44f", 00:17:50.675 "strip_size_kb": 64, 00:17:50.675 "state": "online", 00:17:50.675 "raid_level": "raid5f", 00:17:50.675 "superblock": true, 00:17:50.675 "num_base_bdevs": 3, 00:17:50.675 "num_base_bdevs_discovered": 3, 00:17:50.675 "num_base_bdevs_operational": 3, 00:17:50.675 "base_bdevs_list": [ 00:17:50.675 { 00:17:50.675 "name": "pt1", 00:17:50.675 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:50.675 "is_configured": true, 00:17:50.675 "data_offset": 2048, 00:17:50.675 "data_size": 63488 00:17:50.675 }, 00:17:50.675 { 00:17:50.675 "name": "pt2", 00:17:50.675 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.675 "is_configured": true, 00:17:50.675 "data_offset": 2048, 00:17:50.675 "data_size": 63488 00:17:50.675 }, 00:17:50.675 { 00:17:50.675 "name": "pt3", 00:17:50.675 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:50.675 "is_configured": true, 00:17:50.675 "data_offset": 2048, 00:17:50.675 "data_size": 63488 00:17:50.675 } 00:17:50.675 ] 00:17:50.675 } 00:17:50.675 } 00:17:50.675 }' 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:50.675 pt2 00:17:50.675 pt3' 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.675 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.935 [2024-11-20 08:51:21.660821] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 48e80f1c-39f4-4ca1-8cd1-270b7d1ce44f '!=' 48e80f1c-39f4-4ca1-8cd1-270b7d1ce44f ']' 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:50.935 08:51:21 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.935 [2024-11-20 08:51:21.716691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.935 "name": "raid_bdev1", 00:17:50.935 "uuid": "48e80f1c-39f4-4ca1-8cd1-270b7d1ce44f", 00:17:50.935 "strip_size_kb": 64, 00:17:50.935 "state": "online", 00:17:50.935 "raid_level": "raid5f", 00:17:50.935 "superblock": true, 00:17:50.935 "num_base_bdevs": 3, 00:17:50.935 "num_base_bdevs_discovered": 2, 00:17:50.935 "num_base_bdevs_operational": 2, 00:17:50.935 "base_bdevs_list": [ 00:17:50.935 { 00:17:50.935 "name": null, 00:17:50.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.935 "is_configured": false, 00:17:50.935 "data_offset": 0, 00:17:50.935 "data_size": 63488 00:17:50.935 }, 00:17:50.935 { 00:17:50.935 "name": "pt2", 00:17:50.935 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.935 "is_configured": true, 00:17:50.935 "data_offset": 2048, 00:17:50.935 "data_size": 63488 00:17:50.935 }, 00:17:50.935 { 00:17:50.935 "name": "pt3", 00:17:50.935 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:50.935 "is_configured": true, 00:17:50.935 "data_offset": 2048, 00:17:50.935 "data_size": 63488 00:17:50.935 } 00:17:50.935 ] 00:17:50.935 }' 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.935 08:51:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.504 [2024-11-20 08:51:22.252805] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:17:51.504 [2024-11-20 08:51:22.252970] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:51.504 [2024-11-20 08:51:22.253106] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:51.504 [2024-11-20 08:51:22.253207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:51.504 [2024-11-20 08:51:22.253235] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.504 08:51:22 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.504 [2024-11-20 08:51:22.340793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:51.504 [2024-11-20 08:51:22.340890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.504 [2024-11-20 08:51:22.340918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:51.504 [2024-11-20 08:51:22.340935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:17:51.504 [2024-11-20 08:51:22.343905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.504 [2024-11-20 08:51:22.343960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:51.504 [2024-11-20 08:51:22.344079] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:51.504 [2024-11-20 08:51:22.344146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:51.504 pt2 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.504 08:51:22 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.504 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.504 "name": "raid_bdev1", 00:17:51.504 "uuid": "48e80f1c-39f4-4ca1-8cd1-270b7d1ce44f", 00:17:51.504 "strip_size_kb": 64, 00:17:51.504 "state": "configuring", 00:17:51.504 "raid_level": "raid5f", 00:17:51.504 "superblock": true, 00:17:51.504 "num_base_bdevs": 3, 00:17:51.505 "num_base_bdevs_discovered": 1, 00:17:51.505 "num_base_bdevs_operational": 2, 00:17:51.505 "base_bdevs_list": [ 00:17:51.505 { 00:17:51.505 "name": null, 00:17:51.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.505 "is_configured": false, 00:17:51.505 "data_offset": 2048, 00:17:51.505 "data_size": 63488 00:17:51.505 }, 00:17:51.505 { 00:17:51.505 "name": "pt2", 00:17:51.505 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:51.505 "is_configured": true, 00:17:51.505 "data_offset": 2048, 00:17:51.505 "data_size": 63488 00:17:51.505 }, 00:17:51.505 { 00:17:51.505 "name": null, 00:17:51.505 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:51.505 "is_configured": false, 00:17:51.505 "data_offset": 2048, 00:17:51.505 "data_size": 63488 00:17:51.505 } 00:17:51.505 ] 00:17:51.505 }' 00:17:51.505 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.505 08:51:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # 
i=2 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.073 [2024-11-20 08:51:22.864930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:52.073 [2024-11-20 08:51:22.865022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.073 [2024-11-20 08:51:22.865057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:52.073 [2024-11-20 08:51:22.865075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.073 [2024-11-20 08:51:22.865672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.073 [2024-11-20 08:51:22.865715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:52.073 [2024-11-20 08:51:22.865816] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:52.073 [2024-11-20 08:51:22.865865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:52.073 [2024-11-20 08:51:22.866017] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:52.073 [2024-11-20 08:51:22.866046] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:52.073 [2024-11-20 08:51:22.866371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:52.073 [2024-11-20 08:51:22.871296] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:52.073 [2024-11-20 08:51:22.871322] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:17:52.073 [2024-11-20 08:51:22.871691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.073 pt3 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.073 "name": "raid_bdev1", 00:17:52.073 "uuid": "48e80f1c-39f4-4ca1-8cd1-270b7d1ce44f", 00:17:52.073 "strip_size_kb": 64, 00:17:52.073 "state": "online", 00:17:52.073 "raid_level": "raid5f", 00:17:52.073 "superblock": true, 00:17:52.073 "num_base_bdevs": 3, 00:17:52.073 "num_base_bdevs_discovered": 2, 00:17:52.073 "num_base_bdevs_operational": 2, 00:17:52.073 "base_bdevs_list": [ 00:17:52.073 { 00:17:52.073 "name": null, 00:17:52.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.073 "is_configured": false, 00:17:52.073 "data_offset": 2048, 00:17:52.073 "data_size": 63488 00:17:52.073 }, 00:17:52.073 { 00:17:52.073 "name": "pt2", 00:17:52.073 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.073 "is_configured": true, 00:17:52.073 "data_offset": 2048, 00:17:52.073 "data_size": 63488 00:17:52.073 }, 00:17:52.073 { 00:17:52.073 "name": "pt3", 00:17:52.073 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:52.073 "is_configured": true, 00:17:52.073 "data_offset": 2048, 00:17:52.073 "data_size": 63488 00:17:52.073 } 00:17:52.073 ] 00:17:52.073 }' 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.073 08:51:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.642 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:52.642 08:51:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.642 08:51:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.642 [2024-11-20 08:51:23.445457] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:52.642 [2024-11-20 08:51:23.445498] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:52.642 [2024-11-20 08:51:23.445596] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:17:52.642 [2024-11-20 08:51:23.445691] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:52.642 [2024-11-20 08:51:23.445709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:52.642 08:51:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.642 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.642 08:51:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.642 08:51:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.642 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:52.642 08:51:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.642 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:52.643 08:51:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.643 [2024-11-20 08:51:23.517502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:52.643 [2024-11-20 08:51:23.517611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.643 [2024-11-20 08:51:23.517641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:52.643 [2024-11-20 08:51:23.517655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.643 [2024-11-20 08:51:23.520689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.643 [2024-11-20 08:51:23.520736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:52.643 [2024-11-20 08:51:23.520866] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:52.643 [2024-11-20 08:51:23.520925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:52.643 [2024-11-20 08:51:23.521122] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:52.643 [2024-11-20 08:51:23.521168] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:52.643 [2024-11-20 08:51:23.521198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:52.643 [2024-11-20 08:51:23.521280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:52.643 pt1 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:17:52.643 08:51:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.643 08:51:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.902 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.902 "name": "raid_bdev1", 00:17:52.902 "uuid": "48e80f1c-39f4-4ca1-8cd1-270b7d1ce44f", 00:17:52.902 "strip_size_kb": 64, 00:17:52.902 "state": "configuring", 00:17:52.902 "raid_level": "raid5f", 00:17:52.902 
"superblock": true, 00:17:52.902 "num_base_bdevs": 3, 00:17:52.902 "num_base_bdevs_discovered": 1, 00:17:52.902 "num_base_bdevs_operational": 2, 00:17:52.902 "base_bdevs_list": [ 00:17:52.902 { 00:17:52.902 "name": null, 00:17:52.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.902 "is_configured": false, 00:17:52.902 "data_offset": 2048, 00:17:52.902 "data_size": 63488 00:17:52.902 }, 00:17:52.902 { 00:17:52.902 "name": "pt2", 00:17:52.902 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.902 "is_configured": true, 00:17:52.902 "data_offset": 2048, 00:17:52.902 "data_size": 63488 00:17:52.902 }, 00:17:52.902 { 00:17:52.902 "name": null, 00:17:52.902 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:52.902 "is_configured": false, 00:17:52.902 "data_offset": 2048, 00:17:52.902 "data_size": 63488 00:17:52.902 } 00:17:52.902 ] 00:17:52.902 }' 00:17:52.902 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.902 08:51:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.161 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:53.161 08:51:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.161 08:51:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.161 08:51:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:53.161 08:51:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.161 08:51:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:53.161 08:51:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:53.161 08:51:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.161 08:51:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.161 [2024-11-20 08:51:24.033653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:53.161 [2024-11-20 08:51:24.033872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.161 [2024-11-20 08:51:24.034037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:53.161 [2024-11-20 08:51:24.034237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.161 [2024-11-20 08:51:24.034987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.161 [2024-11-20 08:51:24.035033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:53.161 [2024-11-20 08:51:24.035162] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:53.161 [2024-11-20 08:51:24.035201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:53.161 [2024-11-20 08:51:24.035360] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:53.161 [2024-11-20 08:51:24.035383] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:53.161 [2024-11-20 08:51:24.035714] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:53.161 [2024-11-20 08:51:24.040766] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:53.161 pt3 00:17:53.161 [2024-11-20 08:51:24.040945] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:53.161 [2024-11-20 08:51:24.041286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.161 08:51:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:53.161 08:51:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:53.161 08:51:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.161 08:51:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.161 08:51:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:53.161 08:51:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.161 08:51:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:53.161 08:51:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.161 08:51:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.161 08:51:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.161 08:51:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.161 08:51:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.161 08:51:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.161 08:51:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.161 08:51:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.161 08:51:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.420 08:51:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.420 "name": "raid_bdev1", 00:17:53.420 "uuid": "48e80f1c-39f4-4ca1-8cd1-270b7d1ce44f", 00:17:53.420 "strip_size_kb": 64, 00:17:53.420 "state": "online", 00:17:53.420 "raid_level": 
"raid5f", 00:17:53.420 "superblock": true, 00:17:53.420 "num_base_bdevs": 3, 00:17:53.420 "num_base_bdevs_discovered": 2, 00:17:53.420 "num_base_bdevs_operational": 2, 00:17:53.420 "base_bdevs_list": [ 00:17:53.420 { 00:17:53.420 "name": null, 00:17:53.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.420 "is_configured": false, 00:17:53.420 "data_offset": 2048, 00:17:53.420 "data_size": 63488 00:17:53.420 }, 00:17:53.420 { 00:17:53.420 "name": "pt2", 00:17:53.420 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:53.420 "is_configured": true, 00:17:53.420 "data_offset": 2048, 00:17:53.420 "data_size": 63488 00:17:53.420 }, 00:17:53.420 { 00:17:53.420 "name": "pt3", 00:17:53.420 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:53.420 "is_configured": true, 00:17:53.420 "data_offset": 2048, 00:17:53.420 "data_size": 63488 00:17:53.420 } 00:17:53.420 ] 00:17:53.420 }' 00:17:53.420 08:51:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.420 08:51:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.679 08:51:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:53.679 08:51:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:53.679 08:51:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.679 08:51:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.679 08:51:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.679 08:51:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:53.679 08:51:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:53.679 08:51:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:17:53.679 08:51:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.679 08:51:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.938 [2024-11-20 08:51:24.599267] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:53.938 08:51:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.938 08:51:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 48e80f1c-39f4-4ca1-8cd1-270b7d1ce44f '!=' 48e80f1c-39f4-4ca1-8cd1-270b7d1ce44f ']' 00:17:53.938 08:51:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81503 00:17:53.938 08:51:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81503 ']' 00:17:53.938 08:51:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81503 00:17:53.938 08:51:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:17:53.938 08:51:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.938 08:51:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81503 00:17:53.938 killing process with pid 81503 00:17:53.938 08:51:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:53.938 08:51:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:53.938 08:51:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81503' 00:17:53.938 08:51:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81503 00:17:53.938 [2024-11-20 08:51:24.676580] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:53.938 08:51:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81503 
00:17:53.938 [2024-11-20 08:51:24.676691] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.938 [2024-11-20 08:51:24.676773] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.938 [2024-11-20 08:51:24.676801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:54.197 [2024-11-20 08:51:24.944843] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:55.135 08:51:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:55.135 00:17:55.135 real 0m8.678s 00:17:55.135 user 0m14.240s 00:17:55.135 sys 0m1.216s 00:17:55.135 08:51:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:55.135 ************************************ 00:17:55.135 END TEST raid5f_superblock_test 00:17:55.135 ************************************ 00:17:55.135 08:51:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.135 08:51:26 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:55.135 08:51:26 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:17:55.135 08:51:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:55.135 08:51:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:55.135 08:51:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:55.135 ************************************ 00:17:55.135 START TEST raid5f_rebuild_test 00:17:55.135 ************************************ 00:17:55.135 08:51:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:17:55.135 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:55.135 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:17:55.135 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:55.135 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:55.135 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:55.135 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:55.135 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:55.135 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:55.135 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:55.135 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:55.135 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:55.135 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:55.135 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:55.135 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:55.135 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:55.135 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:55.135 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:55.135 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:55.135 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:55.135 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:55.135 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:55.136 08:51:26 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:55.136 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:55.136 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:55.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.136 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:55.136 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:55.136 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:55.136 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:55.136 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81953 00:17:55.136 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81953 00:17:55.136 08:51:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:55.136 08:51:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81953 ']' 00:17:55.136 08:51:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.136 08:51:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.136 08:51:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.136 08:51:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.136 08:51:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.394 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:17:55.394 Zero copy mechanism will not be used. 00:17:55.394 [2024-11-20 08:51:26.109317] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:17:55.395 [2024-11-20 08:51:26.109501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81953 ] 00:17:55.395 [2024-11-20 08:51:26.286272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.653 [2024-11-20 08:51:26.415243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.912 [2024-11-20 08:51:26.618531] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:55.912 [2024-11-20 08:51:26.618605] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.481 BaseBdev1_malloc 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.481 08:51:27 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.481 [2024-11-20 08:51:27.168665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:56.481 [2024-11-20 08:51:27.168975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.481 [2024-11-20 08:51:27.169048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:56.481 [2024-11-20 08:51:27.169087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.481 [2024-11-20 08:51:27.172100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.481 [2024-11-20 08:51:27.172314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:56.481 BaseBdev1 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.481 BaseBdev2_malloc 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.481 [2024-11-20 08:51:27.220839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:17:56.481 [2024-11-20 08:51:27.220919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.481 [2024-11-20 08:51:27.220950] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:56.481 [2024-11-20 08:51:27.220970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.481 [2024-11-20 08:51:27.223769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.481 [2024-11-20 08:51:27.223963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:56.481 BaseBdev2 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.481 BaseBdev3_malloc 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.481 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.481 [2024-11-20 08:51:27.288599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:56.481 [2024-11-20 08:51:27.288809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.482 [2024-11-20 08:51:27.288853] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:17:56.482 [2024-11-20 08:51:27.288874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.482 [2024-11-20 08:51:27.291612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.482 [2024-11-20 08:51:27.291667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:56.482 BaseBdev3 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.482 spare_malloc 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.482 spare_delay 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.482 [2024-11-20 08:51:27.352586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:56.482 [2024-11-20 08:51:27.352659] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.482 [2024-11-20 08:51:27.352687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:56.482 [2024-11-20 08:51:27.352705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.482 [2024-11-20 08:51:27.355552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.482 [2024-11-20 08:51:27.355606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:56.482 spare 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.482 [2024-11-20 08:51:27.364669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:56.482 [2024-11-20 08:51:27.367054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:56.482 [2024-11-20 08:51:27.367295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:56.482 [2024-11-20 08:51:27.367449] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:56.482 [2024-11-20 08:51:27.367469] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:56.482 [2024-11-20 08:51:27.367801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:56.482 [2024-11-20 08:51:27.372920] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:56.482 [2024-11-20 08:51:27.372953] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:56.482 [2024-11-20 08:51:27.373203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.482 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.741 08:51:27 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.741 "name": "raid_bdev1", 00:17:56.741 "uuid": "4ef7c830-ae1f-43dc-a63b-6a6977e27195", 00:17:56.741 "strip_size_kb": 64, 00:17:56.741 "state": "online", 00:17:56.741 "raid_level": "raid5f", 00:17:56.741 "superblock": false, 00:17:56.741 "num_base_bdevs": 3, 00:17:56.741 "num_base_bdevs_discovered": 3, 00:17:56.741 "num_base_bdevs_operational": 3, 00:17:56.741 "base_bdevs_list": [ 00:17:56.741 { 00:17:56.741 "name": "BaseBdev1", 00:17:56.741 "uuid": "cef4ab4a-83d3-5995-aa87-8740e86b0899", 00:17:56.741 "is_configured": true, 00:17:56.741 "data_offset": 0, 00:17:56.741 "data_size": 65536 00:17:56.741 }, 00:17:56.741 { 00:17:56.741 "name": "BaseBdev2", 00:17:56.741 "uuid": "cc999a4f-4e33-5b45-a3cc-e115ffaca7e1", 00:17:56.741 "is_configured": true, 00:17:56.741 "data_offset": 0, 00:17:56.741 "data_size": 65536 00:17:56.741 }, 00:17:56.741 { 00:17:56.741 "name": "BaseBdev3", 00:17:56.741 "uuid": "c21eb63c-96a0-5439-850b-61e423d4a88c", 00:17:56.741 "is_configured": true, 00:17:56.741 "data_offset": 0, 00:17:56.741 "data_size": 65536 00:17:56.741 } 00:17:56.741 ] 00:17:56.741 }' 00:17:56.741 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.741 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.000 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:57.000 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:57.000 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.000 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.000 [2024-11-20 08:51:27.883211] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.000 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:57.258 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:17:57.258 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.258 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.258 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.258 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:57.258 08:51:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.258 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:57.258 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:57.258 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:57.258 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:57.258 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:57.259 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:57.259 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:57.259 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:57.259 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:57.259 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:57.259 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:57.259 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:57.259 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:17:57.259 08:51:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:57.518 [2024-11-20 08:51:28.255133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:57.518 /dev/nbd0 00:17:57.518 08:51:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:57.518 08:51:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:57.518 08:51:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:57.518 08:51:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:57.518 08:51:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:57.518 08:51:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:57.518 08:51:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:57.518 08:51:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:57.518 08:51:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:57.518 08:51:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:57.518 08:51:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:57.518 1+0 records in 00:17:57.518 1+0 records out 00:17:57.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442491 s, 9.3 MB/s 00:17:57.518 08:51:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.518 08:51:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:57.518 08:51:28 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.518 08:51:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:57.518 08:51:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:57.518 08:51:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:57.518 08:51:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:57.518 08:51:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:57.518 08:51:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:57.518 08:51:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:17:57.518 08:51:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:17:58.085 512+0 records in 00:17:58.085 512+0 records out 00:17:58.085 67108864 bytes (67 MB, 64 MiB) copied, 0.508811 s, 132 MB/s 00:17:58.086 08:51:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:58.086 08:51:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:58.086 08:51:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:58.086 08:51:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:58.086 08:51:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:58.086 08:51:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:58.086 08:51:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:58.345 
[2024-11-20 08:51:29.124690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.345 [2024-11-20 08:51:29.142478] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.345 "name": "raid_bdev1", 00:17:58.345 "uuid": "4ef7c830-ae1f-43dc-a63b-6a6977e27195", 00:17:58.345 "strip_size_kb": 64, 00:17:58.345 "state": "online", 00:17:58.345 "raid_level": "raid5f", 00:17:58.345 "superblock": false, 00:17:58.345 "num_base_bdevs": 3, 00:17:58.345 "num_base_bdevs_discovered": 2, 00:17:58.345 "num_base_bdevs_operational": 2, 00:17:58.345 "base_bdevs_list": [ 00:17:58.345 { 00:17:58.345 "name": null, 00:17:58.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.345 "is_configured": false, 00:17:58.345 "data_offset": 0, 00:17:58.345 "data_size": 65536 00:17:58.345 }, 00:17:58.345 { 00:17:58.345 "name": "BaseBdev2", 00:17:58.345 "uuid": "cc999a4f-4e33-5b45-a3cc-e115ffaca7e1", 00:17:58.345 "is_configured": true, 00:17:58.345 "data_offset": 0, 00:17:58.345 "data_size": 65536 00:17:58.345 }, 00:17:58.345 { 00:17:58.345 "name": "BaseBdev3", 00:17:58.345 "uuid": 
"c21eb63c-96a0-5439-850b-61e423d4a88c", 00:17:58.345 "is_configured": true, 00:17:58.345 "data_offset": 0, 00:17:58.345 "data_size": 65536 00:17:58.345 } 00:17:58.345 ] 00:17:58.345 }' 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.345 08:51:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.913 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:58.913 08:51:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.913 08:51:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.913 [2024-11-20 08:51:29.650643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:58.913 [2024-11-20 08:51:29.666083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:17:58.913 08:51:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.913 08:51:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:58.913 [2024-11-20 08:51:29.673689] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:59.850 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.850 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.850 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:59.850 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:59.850 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.850 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.850 08:51:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.850 08:51:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.850 08:51:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.850 08:51:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.850 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.850 "name": "raid_bdev1", 00:17:59.850 "uuid": "4ef7c830-ae1f-43dc-a63b-6a6977e27195", 00:17:59.850 "strip_size_kb": 64, 00:17:59.850 "state": "online", 00:17:59.850 "raid_level": "raid5f", 00:17:59.850 "superblock": false, 00:17:59.850 "num_base_bdevs": 3, 00:17:59.850 "num_base_bdevs_discovered": 3, 00:17:59.850 "num_base_bdevs_operational": 3, 00:17:59.850 "process": { 00:17:59.850 "type": "rebuild", 00:17:59.850 "target": "spare", 00:17:59.850 "progress": { 00:17:59.850 "blocks": 18432, 00:17:59.850 "percent": 14 00:17:59.850 } 00:17:59.850 }, 00:17:59.850 "base_bdevs_list": [ 00:17:59.850 { 00:17:59.850 "name": "spare", 00:17:59.850 "uuid": "f76805f0-fed7-5b68-9351-615f5c15ab7c", 00:17:59.850 "is_configured": true, 00:17:59.850 "data_offset": 0, 00:17:59.850 "data_size": 65536 00:17:59.850 }, 00:17:59.850 { 00:17:59.850 "name": "BaseBdev2", 00:17:59.850 "uuid": "cc999a4f-4e33-5b45-a3cc-e115ffaca7e1", 00:17:59.850 "is_configured": true, 00:17:59.850 "data_offset": 0, 00:17:59.850 "data_size": 65536 00:17:59.850 }, 00:17:59.850 { 00:17:59.850 "name": "BaseBdev3", 00:17:59.850 "uuid": "c21eb63c-96a0-5439-850b-61e423d4a88c", 00:17:59.850 "is_configured": true, 00:17:59.850 "data_offset": 0, 00:17:59.850 "data_size": 65536 00:17:59.850 } 00:17:59.850 ] 00:17:59.850 }' 00:17:59.850 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.109 [2024-11-20 08:51:30.832194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:00.109 [2024-11-20 08:51:30.888536] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:00.109 [2024-11-20 08:51:30.888619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.109 [2024-11-20 08:51:30.888649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:00.109 [2024-11-20 08:51:30.888662] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.109 "name": "raid_bdev1", 00:18:00.109 "uuid": "4ef7c830-ae1f-43dc-a63b-6a6977e27195", 00:18:00.109 "strip_size_kb": 64, 00:18:00.109 "state": "online", 00:18:00.109 "raid_level": "raid5f", 00:18:00.109 "superblock": false, 00:18:00.109 "num_base_bdevs": 3, 00:18:00.109 "num_base_bdevs_discovered": 2, 00:18:00.109 "num_base_bdevs_operational": 2, 00:18:00.109 "base_bdevs_list": [ 00:18:00.109 { 00:18:00.109 "name": null, 00:18:00.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.109 "is_configured": false, 00:18:00.109 "data_offset": 0, 00:18:00.109 "data_size": 65536 00:18:00.109 }, 00:18:00.109 { 00:18:00.109 "name": "BaseBdev2", 00:18:00.109 "uuid": "cc999a4f-4e33-5b45-a3cc-e115ffaca7e1", 00:18:00.109 "is_configured": true, 00:18:00.109 "data_offset": 0, 00:18:00.109 "data_size": 65536 00:18:00.109 }, 00:18:00.109 { 00:18:00.109 "name": "BaseBdev3", 00:18:00.109 "uuid": 
"c21eb63c-96a0-5439-850b-61e423d4a88c", 00:18:00.109 "is_configured": true, 00:18:00.109 "data_offset": 0, 00:18:00.109 "data_size": 65536 00:18:00.109 } 00:18:00.109 ] 00:18:00.109 }' 00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.109 08:51:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.700 08:51:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:00.701 08:51:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.701 08:51:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:00.701 08:51:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:00.701 08:51:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.701 08:51:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.701 08:51:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.701 08:51:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.701 08:51:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.701 08:51:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.701 08:51:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.701 "name": "raid_bdev1", 00:18:00.701 "uuid": "4ef7c830-ae1f-43dc-a63b-6a6977e27195", 00:18:00.701 "strip_size_kb": 64, 00:18:00.701 "state": "online", 00:18:00.701 "raid_level": "raid5f", 00:18:00.701 "superblock": false, 00:18:00.701 "num_base_bdevs": 3, 00:18:00.701 "num_base_bdevs_discovered": 2, 00:18:00.701 "num_base_bdevs_operational": 2, 00:18:00.701 "base_bdevs_list": [ 00:18:00.701 { 00:18:00.701 
"name": null, 00:18:00.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.701 "is_configured": false, 00:18:00.701 "data_offset": 0, 00:18:00.701 "data_size": 65536 00:18:00.701 }, 00:18:00.701 { 00:18:00.701 "name": "BaseBdev2", 00:18:00.701 "uuid": "cc999a4f-4e33-5b45-a3cc-e115ffaca7e1", 00:18:00.701 "is_configured": true, 00:18:00.701 "data_offset": 0, 00:18:00.701 "data_size": 65536 00:18:00.701 }, 00:18:00.701 { 00:18:00.701 "name": "BaseBdev3", 00:18:00.701 "uuid": "c21eb63c-96a0-5439-850b-61e423d4a88c", 00:18:00.701 "is_configured": true, 00:18:00.701 "data_offset": 0, 00:18:00.701 "data_size": 65536 00:18:00.701 } 00:18:00.701 ] 00:18:00.701 }' 00:18:00.701 08:51:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.701 08:51:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:00.701 08:51:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.701 08:51:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:00.701 08:51:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:00.701 08:51:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.701 08:51:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.701 [2024-11-20 08:51:31.575647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:00.701 [2024-11-20 08:51:31.590375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:18:00.701 08:51:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.701 08:51:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:00.701 [2024-11-20 08:51:31.597554] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.078 "name": "raid_bdev1", 00:18:02.078 "uuid": "4ef7c830-ae1f-43dc-a63b-6a6977e27195", 00:18:02.078 "strip_size_kb": 64, 00:18:02.078 "state": "online", 00:18:02.078 "raid_level": "raid5f", 00:18:02.078 "superblock": false, 00:18:02.078 "num_base_bdevs": 3, 00:18:02.078 "num_base_bdevs_discovered": 3, 00:18:02.078 "num_base_bdevs_operational": 3, 00:18:02.078 "process": { 00:18:02.078 "type": "rebuild", 00:18:02.078 "target": "spare", 00:18:02.078 "progress": { 00:18:02.078 "blocks": 18432, 00:18:02.078 "percent": 14 00:18:02.078 } 00:18:02.078 }, 00:18:02.078 "base_bdevs_list": [ 00:18:02.078 { 00:18:02.078 "name": "spare", 00:18:02.078 "uuid": "f76805f0-fed7-5b68-9351-615f5c15ab7c", 00:18:02.078 "is_configured": true, 00:18:02.078 "data_offset": 0, 
00:18:02.078 "data_size": 65536 00:18:02.078 }, 00:18:02.078 { 00:18:02.078 "name": "BaseBdev2", 00:18:02.078 "uuid": "cc999a4f-4e33-5b45-a3cc-e115ffaca7e1", 00:18:02.078 "is_configured": true, 00:18:02.078 "data_offset": 0, 00:18:02.078 "data_size": 65536 00:18:02.078 }, 00:18:02.078 { 00:18:02.078 "name": "BaseBdev3", 00:18:02.078 "uuid": "c21eb63c-96a0-5439-850b-61e423d4a88c", 00:18:02.078 "is_configured": true, 00:18:02.078 "data_offset": 0, 00:18:02.078 "data_size": 65536 00:18:02.078 } 00:18:02.078 ] 00:18:02.078 }' 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=593 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.078 08:51:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.078 "name": "raid_bdev1", 00:18:02.078 "uuid": "4ef7c830-ae1f-43dc-a63b-6a6977e27195", 00:18:02.078 "strip_size_kb": 64, 00:18:02.078 "state": "online", 00:18:02.078 "raid_level": "raid5f", 00:18:02.078 "superblock": false, 00:18:02.078 "num_base_bdevs": 3, 00:18:02.078 "num_base_bdevs_discovered": 3, 00:18:02.078 "num_base_bdevs_operational": 3, 00:18:02.078 "process": { 00:18:02.078 "type": "rebuild", 00:18:02.078 "target": "spare", 00:18:02.078 "progress": { 00:18:02.078 "blocks": 22528, 00:18:02.078 "percent": 17 00:18:02.078 } 00:18:02.078 }, 00:18:02.078 "base_bdevs_list": [ 00:18:02.078 { 00:18:02.078 "name": "spare", 00:18:02.078 "uuid": "f76805f0-fed7-5b68-9351-615f5c15ab7c", 00:18:02.078 "is_configured": true, 00:18:02.078 "data_offset": 0, 00:18:02.078 "data_size": 65536 00:18:02.078 }, 00:18:02.078 { 00:18:02.078 "name": "BaseBdev2", 00:18:02.078 "uuid": "cc999a4f-4e33-5b45-a3cc-e115ffaca7e1", 00:18:02.078 "is_configured": true, 00:18:02.078 "data_offset": 0, 00:18:02.078 "data_size": 65536 00:18:02.078 }, 00:18:02.078 { 00:18:02.078 "name": "BaseBdev3", 00:18:02.078 "uuid": "c21eb63c-96a0-5439-850b-61e423d4a88c", 00:18:02.078 "is_configured": true, 00:18:02.078 "data_offset": 0, 00:18:02.078 "data_size": 65536 00:18:02.078 } 
00:18:02.078 ] 00:18:02.078 }' 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.078 08:51:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:03.014 08:51:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:03.014 08:51:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:03.014 08:51:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.014 08:51:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:03.014 08:51:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:03.014 08:51:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.014 08:51:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.014 08:51:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.014 08:51:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.014 08:51:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.014 08:51:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.272 08:51:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.272 "name": "raid_bdev1", 00:18:03.272 "uuid": "4ef7c830-ae1f-43dc-a63b-6a6977e27195", 00:18:03.272 
"strip_size_kb": 64, 00:18:03.272 "state": "online", 00:18:03.272 "raid_level": "raid5f", 00:18:03.272 "superblock": false, 00:18:03.272 "num_base_bdevs": 3, 00:18:03.272 "num_base_bdevs_discovered": 3, 00:18:03.272 "num_base_bdevs_operational": 3, 00:18:03.272 "process": { 00:18:03.272 "type": "rebuild", 00:18:03.272 "target": "spare", 00:18:03.273 "progress": { 00:18:03.273 "blocks": 45056, 00:18:03.273 "percent": 34 00:18:03.273 } 00:18:03.273 }, 00:18:03.273 "base_bdevs_list": [ 00:18:03.273 { 00:18:03.273 "name": "spare", 00:18:03.273 "uuid": "f76805f0-fed7-5b68-9351-615f5c15ab7c", 00:18:03.273 "is_configured": true, 00:18:03.273 "data_offset": 0, 00:18:03.273 "data_size": 65536 00:18:03.273 }, 00:18:03.273 { 00:18:03.273 "name": "BaseBdev2", 00:18:03.273 "uuid": "cc999a4f-4e33-5b45-a3cc-e115ffaca7e1", 00:18:03.273 "is_configured": true, 00:18:03.273 "data_offset": 0, 00:18:03.273 "data_size": 65536 00:18:03.273 }, 00:18:03.273 { 00:18:03.273 "name": "BaseBdev3", 00:18:03.273 "uuid": "c21eb63c-96a0-5439-850b-61e423d4a88c", 00:18:03.273 "is_configured": true, 00:18:03.273 "data_offset": 0, 00:18:03.273 "data_size": 65536 00:18:03.273 } 00:18:03.273 ] 00:18:03.273 }' 00:18:03.273 08:51:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.273 08:51:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:03.273 08:51:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.273 08:51:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:03.273 08:51:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:04.208 08:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:04.208 08:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.208 08:51:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.208 08:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.208 08:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.208 08:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.208 08:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.208 08:51:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.208 08:51:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.208 08:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.208 08:51:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.208 08:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.208 "name": "raid_bdev1", 00:18:04.208 "uuid": "4ef7c830-ae1f-43dc-a63b-6a6977e27195", 00:18:04.208 "strip_size_kb": 64, 00:18:04.208 "state": "online", 00:18:04.208 "raid_level": "raid5f", 00:18:04.208 "superblock": false, 00:18:04.208 "num_base_bdevs": 3, 00:18:04.208 "num_base_bdevs_discovered": 3, 00:18:04.208 "num_base_bdevs_operational": 3, 00:18:04.208 "process": { 00:18:04.208 "type": "rebuild", 00:18:04.208 "target": "spare", 00:18:04.208 "progress": { 00:18:04.208 "blocks": 69632, 00:18:04.208 "percent": 53 00:18:04.208 } 00:18:04.208 }, 00:18:04.208 "base_bdevs_list": [ 00:18:04.208 { 00:18:04.208 "name": "spare", 00:18:04.208 "uuid": "f76805f0-fed7-5b68-9351-615f5c15ab7c", 00:18:04.208 "is_configured": true, 00:18:04.208 "data_offset": 0, 00:18:04.208 "data_size": 65536 00:18:04.208 }, 00:18:04.208 { 00:18:04.208 "name": "BaseBdev2", 00:18:04.208 "uuid": "cc999a4f-4e33-5b45-a3cc-e115ffaca7e1", 00:18:04.208 
"is_configured": true, 00:18:04.208 "data_offset": 0, 00:18:04.208 "data_size": 65536 00:18:04.208 }, 00:18:04.208 { 00:18:04.208 "name": "BaseBdev3", 00:18:04.208 "uuid": "c21eb63c-96a0-5439-850b-61e423d4a88c", 00:18:04.208 "is_configured": true, 00:18:04.208 "data_offset": 0, 00:18:04.208 "data_size": 65536 00:18:04.208 } 00:18:04.208 ] 00:18:04.208 }' 00:18:04.208 08:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.467 08:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:04.467 08:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.467 08:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:04.467 08:51:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:05.402 08:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:05.402 08:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:05.402 08:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.402 08:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:05.402 08:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:05.402 08:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.402 08:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.402 08:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.402 08:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.402 08:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:18:05.402 08:51:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.402 08:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.402 "name": "raid_bdev1", 00:18:05.402 "uuid": "4ef7c830-ae1f-43dc-a63b-6a6977e27195", 00:18:05.402 "strip_size_kb": 64, 00:18:05.402 "state": "online", 00:18:05.402 "raid_level": "raid5f", 00:18:05.402 "superblock": false, 00:18:05.402 "num_base_bdevs": 3, 00:18:05.402 "num_base_bdevs_discovered": 3, 00:18:05.402 "num_base_bdevs_operational": 3, 00:18:05.402 "process": { 00:18:05.402 "type": "rebuild", 00:18:05.402 "target": "spare", 00:18:05.402 "progress": { 00:18:05.402 "blocks": 92160, 00:18:05.402 "percent": 70 00:18:05.402 } 00:18:05.402 }, 00:18:05.402 "base_bdevs_list": [ 00:18:05.402 { 00:18:05.402 "name": "spare", 00:18:05.402 "uuid": "f76805f0-fed7-5b68-9351-615f5c15ab7c", 00:18:05.402 "is_configured": true, 00:18:05.402 "data_offset": 0, 00:18:05.402 "data_size": 65536 00:18:05.402 }, 00:18:05.402 { 00:18:05.402 "name": "BaseBdev2", 00:18:05.402 "uuid": "cc999a4f-4e33-5b45-a3cc-e115ffaca7e1", 00:18:05.402 "is_configured": true, 00:18:05.402 "data_offset": 0, 00:18:05.402 "data_size": 65536 00:18:05.402 }, 00:18:05.402 { 00:18:05.402 "name": "BaseBdev3", 00:18:05.402 "uuid": "c21eb63c-96a0-5439-850b-61e423d4a88c", 00:18:05.402 "is_configured": true, 00:18:05.402 "data_offset": 0, 00:18:05.402 "data_size": 65536 00:18:05.402 } 00:18:05.402 ] 00:18:05.402 }' 00:18:05.402 08:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.660 08:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:05.660 08:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.660 08:51:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:05.660 08:51:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:06.595 08:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:06.595 08:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:06.595 08:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.595 08:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:06.595 08:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:06.595 08:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.595 08:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.595 08:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.595 08:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.595 08:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.595 08:51:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.595 08:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.595 "name": "raid_bdev1", 00:18:06.595 "uuid": "4ef7c830-ae1f-43dc-a63b-6a6977e27195", 00:18:06.595 "strip_size_kb": 64, 00:18:06.595 "state": "online", 00:18:06.595 "raid_level": "raid5f", 00:18:06.595 "superblock": false, 00:18:06.595 "num_base_bdevs": 3, 00:18:06.595 "num_base_bdevs_discovered": 3, 00:18:06.595 "num_base_bdevs_operational": 3, 00:18:06.595 "process": { 00:18:06.595 "type": "rebuild", 00:18:06.595 "target": "spare", 00:18:06.595 "progress": { 00:18:06.595 "blocks": 116736, 00:18:06.595 "percent": 89 00:18:06.595 } 00:18:06.595 }, 00:18:06.595 "base_bdevs_list": [ 00:18:06.595 { 
00:18:06.595 "name": "spare", 00:18:06.595 "uuid": "f76805f0-fed7-5b68-9351-615f5c15ab7c", 00:18:06.595 "is_configured": true, 00:18:06.595 "data_offset": 0, 00:18:06.595 "data_size": 65536 00:18:06.595 }, 00:18:06.595 { 00:18:06.595 "name": "BaseBdev2", 00:18:06.595 "uuid": "cc999a4f-4e33-5b45-a3cc-e115ffaca7e1", 00:18:06.595 "is_configured": true, 00:18:06.595 "data_offset": 0, 00:18:06.595 "data_size": 65536 00:18:06.595 }, 00:18:06.595 { 00:18:06.595 "name": "BaseBdev3", 00:18:06.595 "uuid": "c21eb63c-96a0-5439-850b-61e423d4a88c", 00:18:06.595 "is_configured": true, 00:18:06.595 "data_offset": 0, 00:18:06.595 "data_size": 65536 00:18:06.595 } 00:18:06.595 ] 00:18:06.595 }' 00:18:06.595 08:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.595 08:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:06.595 08:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.854 08:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:06.854 08:51:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:07.421 [2024-11-20 08:51:38.067546] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:07.421 [2024-11-20 08:51:38.067852] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:07.421 [2024-11-20 08:51:38.067928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.679 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:07.679 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.679 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.679 08:51:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.679 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:07.679 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.679 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.679 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.679 08:51:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.679 08:51:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.679 08:51:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.679 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.679 "name": "raid_bdev1", 00:18:07.679 "uuid": "4ef7c830-ae1f-43dc-a63b-6a6977e27195", 00:18:07.680 "strip_size_kb": 64, 00:18:07.680 "state": "online", 00:18:07.680 "raid_level": "raid5f", 00:18:07.680 "superblock": false, 00:18:07.680 "num_base_bdevs": 3, 00:18:07.680 "num_base_bdevs_discovered": 3, 00:18:07.680 "num_base_bdevs_operational": 3, 00:18:07.680 "base_bdevs_list": [ 00:18:07.680 { 00:18:07.680 "name": "spare", 00:18:07.680 "uuid": "f76805f0-fed7-5b68-9351-615f5c15ab7c", 00:18:07.680 "is_configured": true, 00:18:07.680 "data_offset": 0, 00:18:07.680 "data_size": 65536 00:18:07.680 }, 00:18:07.680 { 00:18:07.680 "name": "BaseBdev2", 00:18:07.680 "uuid": "cc999a4f-4e33-5b45-a3cc-e115ffaca7e1", 00:18:07.680 "is_configured": true, 00:18:07.680 "data_offset": 0, 00:18:07.680 "data_size": 65536 00:18:07.680 }, 00:18:07.680 { 00:18:07.680 "name": "BaseBdev3", 00:18:07.680 "uuid": "c21eb63c-96a0-5439-850b-61e423d4a88c", 00:18:07.680 "is_configured": true, 00:18:07.680 "data_offset": 0, 00:18:07.680 "data_size": 65536 00:18:07.680 } 
00:18:07.680 ] 00:18:07.680 }' 00:18:07.680 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.939 "name": "raid_bdev1", 00:18:07.939 "uuid": "4ef7c830-ae1f-43dc-a63b-6a6977e27195", 00:18:07.939 "strip_size_kb": 64, 00:18:07.939 "state": "online", 00:18:07.939 "raid_level": "raid5f", 00:18:07.939 "superblock": false, 
00:18:07.939 "num_base_bdevs": 3, 00:18:07.939 "num_base_bdevs_discovered": 3, 00:18:07.939 "num_base_bdevs_operational": 3, 00:18:07.939 "base_bdevs_list": [ 00:18:07.939 { 00:18:07.939 "name": "spare", 00:18:07.939 "uuid": "f76805f0-fed7-5b68-9351-615f5c15ab7c", 00:18:07.939 "is_configured": true, 00:18:07.939 "data_offset": 0, 00:18:07.939 "data_size": 65536 00:18:07.939 }, 00:18:07.939 { 00:18:07.939 "name": "BaseBdev2", 00:18:07.939 "uuid": "cc999a4f-4e33-5b45-a3cc-e115ffaca7e1", 00:18:07.939 "is_configured": true, 00:18:07.939 "data_offset": 0, 00:18:07.939 "data_size": 65536 00:18:07.939 }, 00:18:07.939 { 00:18:07.939 "name": "BaseBdev3", 00:18:07.939 "uuid": "c21eb63c-96a0-5439-850b-61e423d4a88c", 00:18:07.939 "is_configured": true, 00:18:07.939 "data_offset": 0, 00:18:07.939 "data_size": 65536 00:18:07.939 } 00:18:07.939 ] 00:18:07.939 }' 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:07.939 
08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.939 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.197 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.197 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.197 08:51:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.197 08:51:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.197 08:51:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.197 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.197 "name": "raid_bdev1", 00:18:08.197 "uuid": "4ef7c830-ae1f-43dc-a63b-6a6977e27195", 00:18:08.197 "strip_size_kb": 64, 00:18:08.197 "state": "online", 00:18:08.197 "raid_level": "raid5f", 00:18:08.197 "superblock": false, 00:18:08.197 "num_base_bdevs": 3, 00:18:08.197 "num_base_bdevs_discovered": 3, 00:18:08.197 "num_base_bdevs_operational": 3, 00:18:08.197 "base_bdevs_list": [ 00:18:08.197 { 00:18:08.197 "name": "spare", 00:18:08.197 "uuid": "f76805f0-fed7-5b68-9351-615f5c15ab7c", 00:18:08.197 "is_configured": true, 00:18:08.197 "data_offset": 0, 00:18:08.197 "data_size": 65536 00:18:08.197 }, 00:18:08.197 { 00:18:08.197 "name": "BaseBdev2", 00:18:08.197 "uuid": "cc999a4f-4e33-5b45-a3cc-e115ffaca7e1", 00:18:08.197 "is_configured": true, 00:18:08.197 "data_offset": 0, 00:18:08.197 "data_size": 65536 00:18:08.197 }, 00:18:08.197 { 00:18:08.197 "name": "BaseBdev3", 00:18:08.197 "uuid": "c21eb63c-96a0-5439-850b-61e423d4a88c", 
00:18:08.197 "is_configured": true, 00:18:08.197 "data_offset": 0, 00:18:08.197 "data_size": 65536 00:18:08.197 } 00:18:08.197 ] 00:18:08.197 }' 00:18:08.198 08:51:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.198 08:51:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.455 08:51:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:08.455 08:51:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.455 08:51:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.455 [2024-11-20 08:51:39.362834] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:08.455 [2024-11-20 08:51:39.363007] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:08.455 [2024-11-20 08:51:39.363136] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:08.455 [2024-11-20 08:51:39.363265] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:08.455 [2024-11-20 08:51:39.363292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:08.455 08:51:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.713 08:51:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.713 08:51:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:08.713 08:51:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.713 08:51:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.713 08:51:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.713 08:51:39 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:08.713 08:51:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:08.713 08:51:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:08.713 08:51:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:08.713 08:51:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:08.713 08:51:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:08.713 08:51:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:08.713 08:51:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:08.713 08:51:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:08.713 08:51:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:08.713 08:51:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:08.713 08:51:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:08.713 08:51:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:08.972 /dev/nbd0 00:18:08.972 08:51:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:08.972 08:51:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:08.972 08:51:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:08.972 08:51:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:08.972 08:51:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:08.972 08:51:39 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:08.972 08:51:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:08.972 08:51:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:08.972 08:51:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:08.972 08:51:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:08.972 08:51:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:08.972 1+0 records in 00:18:08.972 1+0 records out 00:18:08.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228132 s, 18.0 MB/s 00:18:08.972 08:51:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.972 08:51:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:08.972 08:51:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.972 08:51:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:08.972 08:51:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:08.972 08:51:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:08.972 08:51:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:08.972 08:51:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:09.231 /dev/nbd1 00:18:09.231 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:09.231 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:09.231 08:51:40 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:09.231 08:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:09.231 08:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:09.231 08:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:09.231 08:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:09.231 08:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:09.231 08:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:09.231 08:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:09.231 08:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:09.231 1+0 records in 00:18:09.231 1+0 records out 00:18:09.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398103 s, 10.3 MB/s 00:18:09.231 08:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:09.231 08:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:09.231 08:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:09.231 08:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:09.231 08:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:09.231 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:09.231 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:09.231 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:09.490 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:09.490 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:09.490 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:09.490 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:09.490 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:09.490 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:09.490 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:09.749 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:09.749 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:09.749 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:09.749 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:09.749 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:09.749 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:09.749 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:09.749 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:09.749 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:09.749 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:10.008 08:51:40 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:10.008 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:10.008 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:10.008 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:10.008 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:10.008 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:10.008 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:10.008 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:10.008 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:10.008 08:51:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81953 00:18:10.008 08:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81953 ']' 00:18:10.008 08:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81953 00:18:10.008 08:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:18:10.008 08:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.008 08:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81953 00:18:10.008 killing process with pid 81953 00:18:10.008 Received shutdown signal, test time was about 60.000000 seconds 00:18:10.008 00:18:10.008 Latency(us) 00:18:10.008 [2024-11-20T08:51:40.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.008 [2024-11-20T08:51:40.924Z] =================================================================================================================== 00:18:10.008 [2024-11-20T08:51:40.924Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:18:10.008 08:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:10.008 08:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:10.008 08:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81953' 00:18:10.008 08:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81953 00:18:10.008 [2024-11-20 08:51:40.844788] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:10.008 08:51:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81953 00:18:10.574 [2024-11-20 08:51:41.188016] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:11.510 00:18:11.510 real 0m16.171s 00:18:11.510 user 0m20.607s 00:18:11.510 sys 0m2.003s 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.510 ************************************ 00:18:11.510 END TEST raid5f_rebuild_test 00:18:11.510 ************************************ 00:18:11.510 08:51:42 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:18:11.510 08:51:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:11.510 08:51:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:11.510 08:51:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:11.510 ************************************ 00:18:11.510 START TEST raid5f_rebuild_test_sb 00:18:11.510 ************************************ 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 
00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82401 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82401 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82401 ']' 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:11.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.510 08:51:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.510 [2024-11-20 08:51:42.347616] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:18:11.510 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:11.510 Zero copy mechanism will not be used. 00:18:11.510 [2024-11-20 08:51:42.347966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82401 ] 00:18:11.768 [2024-11-20 08:51:42.521860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.768 [2024-11-20 08:51:42.644301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.027 [2024-11-20 08:51:42.847310] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.027 [2024-11-20 08:51:42.847350] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.595 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.595 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:12.595 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:12.595 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:12.595 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.595 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:12.595 BaseBdev1_malloc 00:18:12.595 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.595 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:12.595 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.595 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.595 [2024-11-20 08:51:43.344175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:12.595 [2024-11-20 08:51:43.344306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.595 [2024-11-20 08:51:43.344342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:12.595 [2024-11-20 08:51:43.344360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.595 [2024-11-20 08:51:43.347195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.595 [2024-11-20 08:51:43.347248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:12.595 BaseBdev1 00:18:12.595 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.595 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:12.595 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:12.595 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.595 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.595 BaseBdev2_malloc 00:18:12.595 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.596 08:51:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:12.596 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.596 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.596 [2024-11-20 08:51:43.396078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:12.596 [2024-11-20 08:51:43.396183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.596 [2024-11-20 08:51:43.396215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:12.596 [2024-11-20 08:51:43.396244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.596 [2024-11-20 08:51:43.398986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.596 [2024-11-20 08:51:43.399037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:12.596 BaseBdev2 00:18:12.596 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.596 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:12.596 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:12.596 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.596 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.596 BaseBdev3_malloc 00:18:12.596 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.596 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:12.596 08:51:43 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.596 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.596 [2024-11-20 08:51:43.454290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:12.596 [2024-11-20 08:51:43.454504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.596 [2024-11-20 08:51:43.454547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:12.596 [2024-11-20 08:51:43.454569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.596 [2024-11-20 08:51:43.457321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.596 [2024-11-20 08:51:43.457377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:12.596 BaseBdev3 00:18:12.596 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.596 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:12.596 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.596 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.596 spare_malloc 00:18:12.596 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.596 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:12.596 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.596 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.596 spare_delay 00:18:12.596 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:12.596 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:12.596 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.596 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.855 [2024-11-20 08:51:43.510737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:12.855 [2024-11-20 08:51:43.510936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.855 [2024-11-20 08:51:43.510973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:12.855 [2024-11-20 08:51:43.510993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.855 [2024-11-20 08:51:43.513830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.855 [2024-11-20 08:51:43.513888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:12.855 spare 00:18:12.855 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.855 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:18:12.855 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.855 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.855 [2024-11-20 08:51:43.518872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:12.855 [2024-11-20 08:51:43.521276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:12.855 [2024-11-20 08:51:43.521516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:12.855 [2024-11-20 
08:51:43.521775] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:12.855 [2024-11-20 08:51:43.521797] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:12.855 [2024-11-20 08:51:43.522133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:12.855 [2024-11-20 08:51:43.527270] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:12.855 [2024-11-20 08:51:43.527303] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:12.855 [2024-11-20 08:51:43.527554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.855 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.855 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:12.855 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.855 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.855 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:12.855 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:12.855 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:12.855 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.855 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.855 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.855 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.855 
08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.855 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.855 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.855 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.855 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.855 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.855 "name": "raid_bdev1", 00:18:12.855 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:12.855 "strip_size_kb": 64, 00:18:12.855 "state": "online", 00:18:12.855 "raid_level": "raid5f", 00:18:12.855 "superblock": true, 00:18:12.855 "num_base_bdevs": 3, 00:18:12.855 "num_base_bdevs_discovered": 3, 00:18:12.855 "num_base_bdevs_operational": 3, 00:18:12.855 "base_bdevs_list": [ 00:18:12.855 { 00:18:12.855 "name": "BaseBdev1", 00:18:12.855 "uuid": "67b27dbf-1787-5cb4-b0c1-04c76bfb83cf", 00:18:12.855 "is_configured": true, 00:18:12.855 "data_offset": 2048, 00:18:12.855 "data_size": 63488 00:18:12.855 }, 00:18:12.855 { 00:18:12.855 "name": "BaseBdev2", 00:18:12.855 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:12.855 "is_configured": true, 00:18:12.855 "data_offset": 2048, 00:18:12.855 "data_size": 63488 00:18:12.855 }, 00:18:12.855 { 00:18:12.855 "name": "BaseBdev3", 00:18:12.855 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:12.855 "is_configured": true, 00:18:12.855 "data_offset": 2048, 00:18:12.855 "data_size": 63488 00:18:12.855 } 00:18:12.855 ] 00:18:12.855 }' 00:18:12.855 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.855 08:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.422 08:51:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:13.422 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.422 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:13.422 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.422 [2024-11-20 08:51:44.045535] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:13.422 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.422 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:18:13.422 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.422 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:13.422 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.422 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.422 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.422 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:13.422 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:13.422 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:13.422 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:13.422 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:13.422 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:18:13.422 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:13.422 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:13.422 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:13.422 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:13.422 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:13.422 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:13.422 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:13.422 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:13.681 [2024-11-20 08:51:44.433445] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:13.681 /dev/nbd0 00:18:13.681 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:13.681 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:13.681 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:13.681 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:13.681 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:13.681 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:13.681 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:13.681 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:13.681 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 
)) 00:18:13.681 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:13.681 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:13.681 1+0 records in 00:18:13.681 1+0 records out 00:18:13.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256349 s, 16.0 MB/s 00:18:13.681 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.681 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:13.681 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.681 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:13.681 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:13.681 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:13.681 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:13.681 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:13.681 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:18:13.681 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:18:13.681 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:18:14.249 496+0 records in 00:18:14.249 496+0 records out 00:18:14.249 65011712 bytes (65 MB, 62 MiB) copied, 0.466596 s, 139 MB/s 00:18:14.249 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:14.249 08:51:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:14.249 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:14.249 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:14.249 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:14.249 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:14.249 08:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:14.508 [2024-11-20 08:51:45.265287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.508 [2024-11-20 08:51:45.283224] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.508 "name": "raid_bdev1", 
00:18:14.508 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:14.508 "strip_size_kb": 64, 00:18:14.508 "state": "online", 00:18:14.508 "raid_level": "raid5f", 00:18:14.508 "superblock": true, 00:18:14.508 "num_base_bdevs": 3, 00:18:14.508 "num_base_bdevs_discovered": 2, 00:18:14.508 "num_base_bdevs_operational": 2, 00:18:14.508 "base_bdevs_list": [ 00:18:14.508 { 00:18:14.508 "name": null, 00:18:14.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.508 "is_configured": false, 00:18:14.508 "data_offset": 0, 00:18:14.508 "data_size": 63488 00:18:14.508 }, 00:18:14.508 { 00:18:14.508 "name": "BaseBdev2", 00:18:14.508 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:14.508 "is_configured": true, 00:18:14.508 "data_offset": 2048, 00:18:14.508 "data_size": 63488 00:18:14.508 }, 00:18:14.508 { 00:18:14.508 "name": "BaseBdev3", 00:18:14.508 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:14.508 "is_configured": true, 00:18:14.508 "data_offset": 2048, 00:18:14.508 "data_size": 63488 00:18:14.508 } 00:18:14.508 ] 00:18:14.508 }' 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.508 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.075 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:15.075 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.075 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.076 [2024-11-20 08:51:45.799352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:15.076 [2024-11-20 08:51:45.814723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:18:15.076 08:51:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.076 08:51:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:15.076 [2024-11-20 08:51:45.822169] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:16.012 08:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.012 08:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.012 08:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:16.012 08:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:16.012 08:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.012 08:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.012 08:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.012 08:51:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.012 08:51:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.012 08:51:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.012 08:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.012 "name": "raid_bdev1", 00:18:16.012 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:16.012 "strip_size_kb": 64, 00:18:16.012 "state": "online", 00:18:16.012 "raid_level": "raid5f", 00:18:16.012 "superblock": true, 00:18:16.012 "num_base_bdevs": 3, 00:18:16.012 "num_base_bdevs_discovered": 3, 00:18:16.012 "num_base_bdevs_operational": 3, 00:18:16.012 "process": { 00:18:16.012 "type": "rebuild", 00:18:16.012 "target": "spare", 00:18:16.012 "progress": { 00:18:16.012 "blocks": 18432, 00:18:16.012 "percent": 14 00:18:16.012 } 
00:18:16.012 }, 00:18:16.012 "base_bdevs_list": [ 00:18:16.012 { 00:18:16.012 "name": "spare", 00:18:16.012 "uuid": "1e0dd311-71c7-545d-b095-f08d8e1a7a61", 00:18:16.012 "is_configured": true, 00:18:16.012 "data_offset": 2048, 00:18:16.012 "data_size": 63488 00:18:16.012 }, 00:18:16.012 { 00:18:16.012 "name": "BaseBdev2", 00:18:16.012 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:16.012 "is_configured": true, 00:18:16.012 "data_offset": 2048, 00:18:16.012 "data_size": 63488 00:18:16.012 }, 00:18:16.012 { 00:18:16.012 "name": "BaseBdev3", 00:18:16.012 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:16.012 "is_configured": true, 00:18:16.012 "data_offset": 2048, 00:18:16.012 "data_size": 63488 00:18:16.012 } 00:18:16.012 ] 00:18:16.012 }' 00:18:16.012 08:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.271 08:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:16.271 08:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.271 08:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.271 08:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:16.271 08:51:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.271 08:51:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.271 [2024-11-20 08:51:46.987726] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:16.271 [2024-11-20 08:51:47.034711] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:16.271 [2024-11-20 08:51:47.034795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.271 [2024-11-20 08:51:47.034827] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:16.271 [2024-11-20 08:51:47.034841] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:16.271 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.271 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:16.271 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.271 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.271 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:16.271 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:16.271 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:16.271 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.271 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.271 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.271 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.271 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.271 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.271 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.271 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.271 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:16.272 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.272 "name": "raid_bdev1", 00:18:16.272 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:16.272 "strip_size_kb": 64, 00:18:16.272 "state": "online", 00:18:16.272 "raid_level": "raid5f", 00:18:16.272 "superblock": true, 00:18:16.272 "num_base_bdevs": 3, 00:18:16.272 "num_base_bdevs_discovered": 2, 00:18:16.272 "num_base_bdevs_operational": 2, 00:18:16.272 "base_bdevs_list": [ 00:18:16.272 { 00:18:16.272 "name": null, 00:18:16.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.272 "is_configured": false, 00:18:16.272 "data_offset": 0, 00:18:16.272 "data_size": 63488 00:18:16.272 }, 00:18:16.272 { 00:18:16.272 "name": "BaseBdev2", 00:18:16.272 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:16.272 "is_configured": true, 00:18:16.272 "data_offset": 2048, 00:18:16.272 "data_size": 63488 00:18:16.272 }, 00:18:16.272 { 00:18:16.272 "name": "BaseBdev3", 00:18:16.272 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:16.272 "is_configured": true, 00:18:16.272 "data_offset": 2048, 00:18:16.272 "data_size": 63488 00:18:16.272 } 00:18:16.272 ] 00:18:16.272 }' 00:18:16.272 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.272 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.842 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:16.842 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.842 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:16.842 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:16.842 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.842 08:51:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.842 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.842 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.842 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.842 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.842 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.842 "name": "raid_bdev1", 00:18:16.842 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:16.842 "strip_size_kb": 64, 00:18:16.842 "state": "online", 00:18:16.842 "raid_level": "raid5f", 00:18:16.842 "superblock": true, 00:18:16.842 "num_base_bdevs": 3, 00:18:16.842 "num_base_bdevs_discovered": 2, 00:18:16.842 "num_base_bdevs_operational": 2, 00:18:16.842 "base_bdevs_list": [ 00:18:16.842 { 00:18:16.842 "name": null, 00:18:16.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.842 "is_configured": false, 00:18:16.842 "data_offset": 0, 00:18:16.842 "data_size": 63488 00:18:16.842 }, 00:18:16.842 { 00:18:16.842 "name": "BaseBdev2", 00:18:16.842 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:16.842 "is_configured": true, 00:18:16.842 "data_offset": 2048, 00:18:16.842 "data_size": 63488 00:18:16.842 }, 00:18:16.842 { 00:18:16.842 "name": "BaseBdev3", 00:18:16.842 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:16.842 "is_configured": true, 00:18:16.842 "data_offset": 2048, 00:18:16.842 "data_size": 63488 00:18:16.842 } 00:18:16.842 ] 00:18:16.842 }' 00:18:16.842 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.842 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:16.842 08:51:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.842 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:16.842 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:16.842 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.842 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.842 [2024-11-20 08:51:47.745699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:17.101 [2024-11-20 08:51:47.760721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:18:17.101 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.101 08:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:17.101 [2024-11-20 08:51:47.768104] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:18.051 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.051 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.052 08:51:48 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.052 "name": "raid_bdev1", 00:18:18.052 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:18.052 "strip_size_kb": 64, 00:18:18.052 "state": "online", 00:18:18.052 "raid_level": "raid5f", 00:18:18.052 "superblock": true, 00:18:18.052 "num_base_bdevs": 3, 00:18:18.052 "num_base_bdevs_discovered": 3, 00:18:18.052 "num_base_bdevs_operational": 3, 00:18:18.052 "process": { 00:18:18.052 "type": "rebuild", 00:18:18.052 "target": "spare", 00:18:18.052 "progress": { 00:18:18.052 "blocks": 18432, 00:18:18.052 "percent": 14 00:18:18.052 } 00:18:18.052 }, 00:18:18.052 "base_bdevs_list": [ 00:18:18.052 { 00:18:18.052 "name": "spare", 00:18:18.052 "uuid": "1e0dd311-71c7-545d-b095-f08d8e1a7a61", 00:18:18.052 "is_configured": true, 00:18:18.052 "data_offset": 2048, 00:18:18.052 "data_size": 63488 00:18:18.052 }, 00:18:18.052 { 00:18:18.052 "name": "BaseBdev2", 00:18:18.052 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:18.052 "is_configured": true, 00:18:18.052 "data_offset": 2048, 00:18:18.052 "data_size": 63488 00:18:18.052 }, 00:18:18.052 { 00:18:18.052 "name": "BaseBdev3", 00:18:18.052 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:18.052 "is_configured": true, 00:18:18.052 "data_offset": 2048, 00:18:18.052 "data_size": 63488 00:18:18.052 } 00:18:18.052 ] 00:18:18.052 }' 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:18.052 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=609 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.052 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.052 08:51:48 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.317 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.317 "name": "raid_bdev1", 00:18:18.317 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:18.317 "strip_size_kb": 64, 00:18:18.317 "state": "online", 00:18:18.317 "raid_level": "raid5f", 00:18:18.317 "superblock": true, 00:18:18.317 "num_base_bdevs": 3, 00:18:18.317 "num_base_bdevs_discovered": 3, 00:18:18.317 "num_base_bdevs_operational": 3, 00:18:18.317 "process": { 00:18:18.317 "type": "rebuild", 00:18:18.317 "target": "spare", 00:18:18.317 "progress": { 00:18:18.317 "blocks": 22528, 00:18:18.317 "percent": 17 00:18:18.317 } 00:18:18.317 }, 00:18:18.317 "base_bdevs_list": [ 00:18:18.317 { 00:18:18.317 "name": "spare", 00:18:18.317 "uuid": "1e0dd311-71c7-545d-b095-f08d8e1a7a61", 00:18:18.317 "is_configured": true, 00:18:18.317 "data_offset": 2048, 00:18:18.317 "data_size": 63488 00:18:18.317 }, 00:18:18.317 { 00:18:18.317 "name": "BaseBdev2", 00:18:18.317 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:18.317 "is_configured": true, 00:18:18.317 "data_offset": 2048, 00:18:18.317 "data_size": 63488 00:18:18.317 }, 00:18:18.317 { 00:18:18.317 "name": "BaseBdev3", 00:18:18.317 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:18.317 "is_configured": true, 00:18:18.317 "data_offset": 2048, 00:18:18.317 "data_size": 63488 00:18:18.317 } 00:18:18.317 ] 00:18:18.317 }' 00:18:18.317 08:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.317 08:51:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:18.317 08:51:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.317 08:51:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.317 08:51:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:19.280 08:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:19.280 08:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:19.280 08:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.280 08:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:19.280 08:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:19.280 08:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.280 08:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.280 08:51:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.280 08:51:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.280 08:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.280 08:51:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.280 08:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.280 "name": "raid_bdev1", 00:18:19.280 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:19.280 "strip_size_kb": 64, 00:18:19.280 "state": "online", 00:18:19.280 "raid_level": "raid5f", 00:18:19.280 "superblock": true, 00:18:19.280 "num_base_bdevs": 3, 00:18:19.280 "num_base_bdevs_discovered": 3, 00:18:19.280 "num_base_bdevs_operational": 3, 00:18:19.280 "process": { 00:18:19.280 "type": "rebuild", 00:18:19.280 "target": "spare", 00:18:19.280 "progress": { 00:18:19.280 "blocks": 47104, 00:18:19.280 "percent": 37 00:18:19.280 } 00:18:19.280 }, 00:18:19.280 
"base_bdevs_list": [ 00:18:19.280 { 00:18:19.280 "name": "spare", 00:18:19.280 "uuid": "1e0dd311-71c7-545d-b095-f08d8e1a7a61", 00:18:19.280 "is_configured": true, 00:18:19.280 "data_offset": 2048, 00:18:19.280 "data_size": 63488 00:18:19.280 }, 00:18:19.280 { 00:18:19.280 "name": "BaseBdev2", 00:18:19.280 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:19.280 "is_configured": true, 00:18:19.280 "data_offset": 2048, 00:18:19.280 "data_size": 63488 00:18:19.280 }, 00:18:19.280 { 00:18:19.280 "name": "BaseBdev3", 00:18:19.280 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:19.280 "is_configured": true, 00:18:19.280 "data_offset": 2048, 00:18:19.280 "data_size": 63488 00:18:19.280 } 00:18:19.280 ] 00:18:19.280 }' 00:18:19.280 08:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.280 08:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:19.537 08:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.537 08:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.537 08:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:20.472 08:51:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:20.472 08:51:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.472 08:51:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.472 08:51:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.472 08:51:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.472 08:51:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.472 08:51:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.472 08:51:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.472 08:51:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.472 08:51:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.472 08:51:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.472 08:51:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.472 "name": "raid_bdev1", 00:18:20.472 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:20.472 "strip_size_kb": 64, 00:18:20.472 "state": "online", 00:18:20.472 "raid_level": "raid5f", 00:18:20.472 "superblock": true, 00:18:20.472 "num_base_bdevs": 3, 00:18:20.472 "num_base_bdevs_discovered": 3, 00:18:20.472 "num_base_bdevs_operational": 3, 00:18:20.472 "process": { 00:18:20.472 "type": "rebuild", 00:18:20.472 "target": "spare", 00:18:20.472 "progress": { 00:18:20.472 "blocks": 69632, 00:18:20.472 "percent": 54 00:18:20.472 } 00:18:20.472 }, 00:18:20.472 "base_bdevs_list": [ 00:18:20.472 { 00:18:20.472 "name": "spare", 00:18:20.472 "uuid": "1e0dd311-71c7-545d-b095-f08d8e1a7a61", 00:18:20.472 "is_configured": true, 00:18:20.472 "data_offset": 2048, 00:18:20.472 "data_size": 63488 00:18:20.472 }, 00:18:20.472 { 00:18:20.472 "name": "BaseBdev2", 00:18:20.472 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:20.472 "is_configured": true, 00:18:20.472 "data_offset": 2048, 00:18:20.472 "data_size": 63488 00:18:20.472 }, 00:18:20.472 { 00:18:20.472 "name": "BaseBdev3", 00:18:20.472 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:20.472 "is_configured": true, 00:18:20.472 "data_offset": 2048, 00:18:20.472 "data_size": 63488 00:18:20.472 } 00:18:20.472 ] 00:18:20.472 }' 00:18:20.472 08:51:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.472 08:51:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:20.472 08:51:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.729 08:51:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.729 08:51:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:21.661 08:51:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:21.661 08:51:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.661 08:51:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.661 08:51:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.661 08:51:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.661 08:51:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.661 08:51:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.661 08:51:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.661 08:51:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.661 08:51:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.661 08:51:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.661 08:51:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.661 "name": "raid_bdev1", 00:18:21.661 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:21.661 
"strip_size_kb": 64, 00:18:21.661 "state": "online", 00:18:21.661 "raid_level": "raid5f", 00:18:21.661 "superblock": true, 00:18:21.661 "num_base_bdevs": 3, 00:18:21.661 "num_base_bdevs_discovered": 3, 00:18:21.661 "num_base_bdevs_operational": 3, 00:18:21.661 "process": { 00:18:21.661 "type": "rebuild", 00:18:21.661 "target": "spare", 00:18:21.661 "progress": { 00:18:21.661 "blocks": 92160, 00:18:21.661 "percent": 72 00:18:21.661 } 00:18:21.661 }, 00:18:21.661 "base_bdevs_list": [ 00:18:21.661 { 00:18:21.661 "name": "spare", 00:18:21.661 "uuid": "1e0dd311-71c7-545d-b095-f08d8e1a7a61", 00:18:21.661 "is_configured": true, 00:18:21.661 "data_offset": 2048, 00:18:21.661 "data_size": 63488 00:18:21.661 }, 00:18:21.661 { 00:18:21.661 "name": "BaseBdev2", 00:18:21.661 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:21.661 "is_configured": true, 00:18:21.661 "data_offset": 2048, 00:18:21.661 "data_size": 63488 00:18:21.661 }, 00:18:21.661 { 00:18:21.661 "name": "BaseBdev3", 00:18:21.661 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:21.661 "is_configured": true, 00:18:21.661 "data_offset": 2048, 00:18:21.661 "data_size": 63488 00:18:21.661 } 00:18:21.661 ] 00:18:21.661 }' 00:18:21.661 08:51:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.661 08:51:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.661 08:51:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.661 08:51:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.661 08:51:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:23.037 08:51:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:23.037 08:51:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:18:23.037 08:51:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.037 08:51:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:23.037 08:51:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:23.037 08:51:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.037 08:51:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.037 08:51:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.037 08:51:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.037 08:51:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.037 08:51:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.037 08:51:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.037 "name": "raid_bdev1", 00:18:23.037 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:23.037 "strip_size_kb": 64, 00:18:23.037 "state": "online", 00:18:23.037 "raid_level": "raid5f", 00:18:23.037 "superblock": true, 00:18:23.037 "num_base_bdevs": 3, 00:18:23.037 "num_base_bdevs_discovered": 3, 00:18:23.037 "num_base_bdevs_operational": 3, 00:18:23.037 "process": { 00:18:23.037 "type": "rebuild", 00:18:23.037 "target": "spare", 00:18:23.037 "progress": { 00:18:23.037 "blocks": 116736, 00:18:23.037 "percent": 91 00:18:23.037 } 00:18:23.037 }, 00:18:23.037 "base_bdevs_list": [ 00:18:23.037 { 00:18:23.037 "name": "spare", 00:18:23.037 "uuid": "1e0dd311-71c7-545d-b095-f08d8e1a7a61", 00:18:23.037 "is_configured": true, 00:18:23.037 "data_offset": 2048, 00:18:23.037 "data_size": 63488 00:18:23.037 }, 00:18:23.037 { 00:18:23.037 "name": "BaseBdev2", 00:18:23.037 "uuid": 
"e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:23.037 "is_configured": true, 00:18:23.037 "data_offset": 2048, 00:18:23.037 "data_size": 63488 00:18:23.037 }, 00:18:23.037 { 00:18:23.037 "name": "BaseBdev3", 00:18:23.037 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:23.037 "is_configured": true, 00:18:23.037 "data_offset": 2048, 00:18:23.037 "data_size": 63488 00:18:23.037 } 00:18:23.037 ] 00:18:23.037 }' 00:18:23.037 08:51:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.037 08:51:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:23.037 08:51:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.037 08:51:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:23.037 08:51:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:23.297 [2024-11-20 08:51:54.035206] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:23.297 [2024-11-20 08:51:54.035676] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:23.297 [2024-11-20 08:51:54.035866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.934 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:23.934 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.934 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.934 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:23.934 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:23.934 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.934 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.934 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.934 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.934 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.934 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.934 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.934 "name": "raid_bdev1", 00:18:23.934 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:23.934 "strip_size_kb": 64, 00:18:23.934 "state": "online", 00:18:23.934 "raid_level": "raid5f", 00:18:23.934 "superblock": true, 00:18:23.934 "num_base_bdevs": 3, 00:18:23.934 "num_base_bdevs_discovered": 3, 00:18:23.934 "num_base_bdevs_operational": 3, 00:18:23.934 "base_bdevs_list": [ 00:18:23.934 { 00:18:23.934 "name": "spare", 00:18:23.934 "uuid": "1e0dd311-71c7-545d-b095-f08d8e1a7a61", 00:18:23.934 "is_configured": true, 00:18:23.934 "data_offset": 2048, 00:18:23.934 "data_size": 63488 00:18:23.934 }, 00:18:23.934 { 00:18:23.934 "name": "BaseBdev2", 00:18:23.934 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:23.934 "is_configured": true, 00:18:23.934 "data_offset": 2048, 00:18:23.934 "data_size": 63488 00:18:23.934 }, 00:18:23.934 { 00:18:23.934 "name": "BaseBdev3", 00:18:23.934 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:23.934 "is_configured": true, 00:18:23.934 "data_offset": 2048, 00:18:23.934 "data_size": 63488 00:18:23.934 } 00:18:23.934 ] 00:18:23.934 }' 00:18:23.934 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.934 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:24.194 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.194 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:24.194 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:24.194 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:24.194 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.194 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:24.194 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:24.194 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.194 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.194 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.194 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.194 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.194 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.194 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.194 "name": "raid_bdev1", 00:18:24.194 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:24.194 "strip_size_kb": 64, 00:18:24.194 "state": "online", 00:18:24.194 "raid_level": "raid5f", 00:18:24.194 "superblock": true, 00:18:24.194 "num_base_bdevs": 3, 00:18:24.194 "num_base_bdevs_discovered": 3, 00:18:24.194 "num_base_bdevs_operational": 3, 00:18:24.194 "base_bdevs_list": [ 
00:18:24.194 { 00:18:24.194 "name": "spare", 00:18:24.194 "uuid": "1e0dd311-71c7-545d-b095-f08d8e1a7a61", 00:18:24.194 "is_configured": true, 00:18:24.194 "data_offset": 2048, 00:18:24.194 "data_size": 63488 00:18:24.194 }, 00:18:24.194 { 00:18:24.194 "name": "BaseBdev2", 00:18:24.194 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:24.194 "is_configured": true, 00:18:24.194 "data_offset": 2048, 00:18:24.194 "data_size": 63488 00:18:24.194 }, 00:18:24.194 { 00:18:24.194 "name": "BaseBdev3", 00:18:24.194 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:24.194 "is_configured": true, 00:18:24.194 "data_offset": 2048, 00:18:24.194 "data_size": 63488 00:18:24.194 } 00:18:24.194 ] 00:18:24.194 }' 00:18:24.194 08:51:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.194 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:24.194 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.194 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:24.194 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:24.194 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.194 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.194 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:24.194 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:24.194 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:24.194 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.194 08:51:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.194 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.194 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.194 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.194 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.194 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.194 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.194 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.455 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.455 "name": "raid_bdev1", 00:18:24.455 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:24.455 "strip_size_kb": 64, 00:18:24.455 "state": "online", 00:18:24.455 "raid_level": "raid5f", 00:18:24.455 "superblock": true, 00:18:24.455 "num_base_bdevs": 3, 00:18:24.455 "num_base_bdevs_discovered": 3, 00:18:24.455 "num_base_bdevs_operational": 3, 00:18:24.455 "base_bdevs_list": [ 00:18:24.455 { 00:18:24.455 "name": "spare", 00:18:24.455 "uuid": "1e0dd311-71c7-545d-b095-f08d8e1a7a61", 00:18:24.455 "is_configured": true, 00:18:24.455 "data_offset": 2048, 00:18:24.455 "data_size": 63488 00:18:24.455 }, 00:18:24.455 { 00:18:24.455 "name": "BaseBdev2", 00:18:24.455 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:24.455 "is_configured": true, 00:18:24.455 "data_offset": 2048, 00:18:24.455 "data_size": 63488 00:18:24.455 }, 00:18:24.455 { 00:18:24.455 "name": "BaseBdev3", 00:18:24.455 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:24.455 "is_configured": true, 00:18:24.455 "data_offset": 2048, 00:18:24.455 
"data_size": 63488 00:18:24.455 } 00:18:24.455 ] 00:18:24.455 }' 00:18:24.455 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.455 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.715 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:24.715 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.715 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.715 [2024-11-20 08:51:55.572346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:24.715 [2024-11-20 08:51:55.572518] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:24.715 [2024-11-20 08:51:55.572649] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.715 [2024-11-20 08:51:55.572762] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.715 [2024-11-20 08:51:55.572793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:24.715 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.715 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.715 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:24.715 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.715 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.715 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.974 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:18:24.975 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:24.975 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:24.975 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:24.975 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:24.975 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:24.975 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:24.975 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:24.975 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:24.975 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:24.975 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:24.975 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:24.975 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:25.234 /dev/nbd0 00:18:25.234 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:25.234 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:25.234 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:25.234 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:25.234 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:25.234 08:51:55 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:25.234 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:25.234 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:25.234 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:25.234 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:25.234 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:25.234 1+0 records in 00:18:25.234 1+0 records out 00:18:25.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000685096 s, 6.0 MB/s 00:18:25.234 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:25.234 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:25.234 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:25.234 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:25.234 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:25.234 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:25.234 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:25.234 08:51:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:25.493 /dev/nbd1 00:18:25.493 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:25.493 08:51:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:25.494 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:25.494 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:25.494 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:25.494 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:25.494 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:25.494 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:25.494 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:25.494 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:25.494 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:25.494 1+0 records in 00:18:25.494 1+0 records out 00:18:25.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288182 s, 14.2 MB/s 00:18:25.494 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:25.494 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:25.494 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:25.494 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:25.494 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:25.494 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:25.494 08:51:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:25.494 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:25.752 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:25.752 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:25.752 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:25.752 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:25.752 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:25.752 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:25.752 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:26.012 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:26.012 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:26.012 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:26.012 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:26.012 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:26.012 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:26.012 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:26.012 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:26.012 08:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:26.012 
08:51:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:26.272 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:26.272 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:26.272 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:26.272 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:26.272 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:26.272 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:26.272 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:26.272 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:26.272 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:26.272 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:26.272 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.272 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.272 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.272 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:26.272 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.272 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.272 [2024-11-20 08:51:57.106020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:26.272 
[2024-11-20 08:51:57.106280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.272 [2024-11-20 08:51:57.106324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:26.272 [2024-11-20 08:51:57.106344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.272 [2024-11-20 08:51:57.109462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.272 [2024-11-20 08:51:57.109686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:26.272 [2024-11-20 08:51:57.109818] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:26.272 [2024-11-20 08:51:57.109899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:26.272 [2024-11-20 08:51:57.110088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:26.272 [2024-11-20 08:51:57.110340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:26.272 spare 00:18:26.272 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.272 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:26.272 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.272 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.531 [2024-11-20 08:51:57.210472] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:26.531 [2024-11-20 08:51:57.210684] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:26.531 [2024-11-20 08:51:57.211104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:18:26.531 [2024-11-20 08:51:57.216205] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:26.531 [2024-11-20 08:51:57.216292] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:26.531 [2024-11-20 08:51:57.216580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.531 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.532 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:26.532 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.532 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.532 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:26.532 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:26.532 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:26.532 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.532 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.532 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.532 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.532 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.532 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.532 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.532 08:51:57 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:18:26.532 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.532 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.532 "name": "raid_bdev1", 00:18:26.532 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:26.532 "strip_size_kb": 64, 00:18:26.532 "state": "online", 00:18:26.532 "raid_level": "raid5f", 00:18:26.532 "superblock": true, 00:18:26.532 "num_base_bdevs": 3, 00:18:26.532 "num_base_bdevs_discovered": 3, 00:18:26.532 "num_base_bdevs_operational": 3, 00:18:26.532 "base_bdevs_list": [ 00:18:26.532 { 00:18:26.532 "name": "spare", 00:18:26.532 "uuid": "1e0dd311-71c7-545d-b095-f08d8e1a7a61", 00:18:26.532 "is_configured": true, 00:18:26.532 "data_offset": 2048, 00:18:26.532 "data_size": 63488 00:18:26.532 }, 00:18:26.532 { 00:18:26.532 "name": "BaseBdev2", 00:18:26.532 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:26.532 "is_configured": true, 00:18:26.532 "data_offset": 2048, 00:18:26.532 "data_size": 63488 00:18:26.532 }, 00:18:26.532 { 00:18:26.532 "name": "BaseBdev3", 00:18:26.532 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:26.532 "is_configured": true, 00:18:26.532 "data_offset": 2048, 00:18:26.532 "data_size": 63488 00:18:26.532 } 00:18:26.532 ] 00:18:26.532 }' 00:18:26.532 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.532 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.100 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:27.100 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.100 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:27.100 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:18:27.100 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.100 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.100 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.100 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.100 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.100 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.100 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.100 "name": "raid_bdev1", 00:18:27.100 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:27.100 "strip_size_kb": 64, 00:18:27.100 "state": "online", 00:18:27.100 "raid_level": "raid5f", 00:18:27.100 "superblock": true, 00:18:27.100 "num_base_bdevs": 3, 00:18:27.100 "num_base_bdevs_discovered": 3, 00:18:27.100 "num_base_bdevs_operational": 3, 00:18:27.100 "base_bdevs_list": [ 00:18:27.100 { 00:18:27.100 "name": "spare", 00:18:27.100 "uuid": "1e0dd311-71c7-545d-b095-f08d8e1a7a61", 00:18:27.100 "is_configured": true, 00:18:27.100 "data_offset": 2048, 00:18:27.100 "data_size": 63488 00:18:27.100 }, 00:18:27.100 { 00:18:27.100 "name": "BaseBdev2", 00:18:27.100 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:27.100 "is_configured": true, 00:18:27.100 "data_offset": 2048, 00:18:27.100 "data_size": 63488 00:18:27.100 }, 00:18:27.100 { 00:18:27.100 "name": "BaseBdev3", 00:18:27.101 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:27.101 "is_configured": true, 00:18:27.101 "data_offset": 2048, 00:18:27.101 "data_size": 63488 00:18:27.101 } 00:18:27.101 ] 00:18:27.101 }' 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.101 [2024-11-20 08:51:57.958971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.101 08:51:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.360 08:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.360 "name": "raid_bdev1", 00:18:27.360 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:27.360 "strip_size_kb": 64, 00:18:27.360 "state": "online", 00:18:27.360 "raid_level": "raid5f", 00:18:27.360 "superblock": true, 00:18:27.360 "num_base_bdevs": 3, 00:18:27.360 "num_base_bdevs_discovered": 2, 00:18:27.360 "num_base_bdevs_operational": 2, 00:18:27.360 "base_bdevs_list": [ 00:18:27.360 { 00:18:27.360 "name": null, 00:18:27.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.360 "is_configured": false, 00:18:27.360 "data_offset": 0, 00:18:27.360 "data_size": 63488 00:18:27.360 }, 00:18:27.360 { 00:18:27.360 "name": "BaseBdev2", 
00:18:27.360 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:27.360 "is_configured": true, 00:18:27.360 "data_offset": 2048, 00:18:27.360 "data_size": 63488 00:18:27.360 }, 00:18:27.360 { 00:18:27.360 "name": "BaseBdev3", 00:18:27.360 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:27.360 "is_configured": true, 00:18:27.360 "data_offset": 2048, 00:18:27.360 "data_size": 63488 00:18:27.360 } 00:18:27.360 ] 00:18:27.360 }' 00:18:27.360 08:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.360 08:51:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.619 08:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:27.619 08:51:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.619 08:51:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.619 [2024-11-20 08:51:58.459183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:27.619 [2024-11-20 08:51:58.459554] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:27.619 [2024-11-20 08:51:58.459738] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:27.619 [2024-11-20 08:51:58.459801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:27.619 [2024-11-20 08:51:58.474077] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:18:27.619 08:51:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.619 08:51:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:27.619 [2024-11-20 08:51:58.481309] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:28.996 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.996 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.996 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.996 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:28.996 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.996 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.996 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.996 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.996 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.996 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.996 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.996 "name": "raid_bdev1", 00:18:28.996 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:28.996 "strip_size_kb": 64, 00:18:28.996 "state": "online", 00:18:28.996 
"raid_level": "raid5f", 00:18:28.996 "superblock": true, 00:18:28.996 "num_base_bdevs": 3, 00:18:28.996 "num_base_bdevs_discovered": 3, 00:18:28.996 "num_base_bdevs_operational": 3, 00:18:28.996 "process": { 00:18:28.996 "type": "rebuild", 00:18:28.996 "target": "spare", 00:18:28.996 "progress": { 00:18:28.996 "blocks": 18432, 00:18:28.996 "percent": 14 00:18:28.996 } 00:18:28.996 }, 00:18:28.996 "base_bdevs_list": [ 00:18:28.996 { 00:18:28.996 "name": "spare", 00:18:28.996 "uuid": "1e0dd311-71c7-545d-b095-f08d8e1a7a61", 00:18:28.996 "is_configured": true, 00:18:28.996 "data_offset": 2048, 00:18:28.996 "data_size": 63488 00:18:28.996 }, 00:18:28.996 { 00:18:28.996 "name": "BaseBdev2", 00:18:28.996 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:28.996 "is_configured": true, 00:18:28.996 "data_offset": 2048, 00:18:28.996 "data_size": 63488 00:18:28.996 }, 00:18:28.996 { 00:18:28.996 "name": "BaseBdev3", 00:18:28.997 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:28.997 "is_configured": true, 00:18:28.997 "data_offset": 2048, 00:18:28.997 "data_size": 63488 00:18:28.997 } 00:18:28.997 ] 00:18:28.997 }' 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.997 [2024-11-20 08:51:59.646796] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:28.997 [2024-11-20 08:51:59.695003] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:28.997 [2024-11-20 08:51:59.695097] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.997 [2024-11-20 08:51:59.695122] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:28.997 [2024-11-20 08:51:59.695138] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.997 "name": "raid_bdev1", 00:18:28.997 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:28.997 "strip_size_kb": 64, 00:18:28.997 "state": "online", 00:18:28.997 "raid_level": "raid5f", 00:18:28.997 "superblock": true, 00:18:28.997 "num_base_bdevs": 3, 00:18:28.997 "num_base_bdevs_discovered": 2, 00:18:28.997 "num_base_bdevs_operational": 2, 00:18:28.997 "base_bdevs_list": [ 00:18:28.997 { 00:18:28.997 "name": null, 00:18:28.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.997 "is_configured": false, 00:18:28.997 "data_offset": 0, 00:18:28.997 "data_size": 63488 00:18:28.997 }, 00:18:28.997 { 00:18:28.997 "name": "BaseBdev2", 00:18:28.997 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:28.997 "is_configured": true, 00:18:28.997 "data_offset": 2048, 00:18:28.997 "data_size": 63488 00:18:28.997 }, 00:18:28.997 { 00:18:28.997 "name": "BaseBdev3", 00:18:28.997 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:28.997 "is_configured": true, 00:18:28.997 "data_offset": 2048, 00:18:28.997 "data_size": 63488 00:18:28.997 } 00:18:28.997 ] 00:18:28.997 }' 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.997 08:51:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.566 08:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:29.566 08:52:00 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.566 08:52:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.566 [2024-11-20 08:52:00.242841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:29.566 [2024-11-20 08:52:00.243100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.566 [2024-11-20 08:52:00.243205] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:18:29.566 [2024-11-20 08:52:00.243423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.566 [2024-11-20 08:52:00.244105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.566 [2024-11-20 08:52:00.244341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:29.566 [2024-11-20 08:52:00.244482] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:29.566 [2024-11-20 08:52:00.244508] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:29.566 [2024-11-20 08:52:00.244523] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:29.566 [2024-11-20 08:52:00.244560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:29.566 [2024-11-20 08:52:00.258611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:18:29.566 spare 00:18:29.566 08:52:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.566 08:52:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:29.566 [2024-11-20 08:52:00.266015] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:30.500 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.500 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.500 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:30.500 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:30.500 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.500 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.500 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.500 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.500 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.500 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.500 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.500 "name": "raid_bdev1", 00:18:30.500 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:30.500 "strip_size_kb": 64, 00:18:30.500 "state": 
"online", 00:18:30.500 "raid_level": "raid5f", 00:18:30.500 "superblock": true, 00:18:30.500 "num_base_bdevs": 3, 00:18:30.500 "num_base_bdevs_discovered": 3, 00:18:30.500 "num_base_bdevs_operational": 3, 00:18:30.500 "process": { 00:18:30.500 "type": "rebuild", 00:18:30.500 "target": "spare", 00:18:30.500 "progress": { 00:18:30.500 "blocks": 18432, 00:18:30.500 "percent": 14 00:18:30.500 } 00:18:30.500 }, 00:18:30.500 "base_bdevs_list": [ 00:18:30.500 { 00:18:30.500 "name": "spare", 00:18:30.501 "uuid": "1e0dd311-71c7-545d-b095-f08d8e1a7a61", 00:18:30.501 "is_configured": true, 00:18:30.501 "data_offset": 2048, 00:18:30.501 "data_size": 63488 00:18:30.501 }, 00:18:30.501 { 00:18:30.501 "name": "BaseBdev2", 00:18:30.501 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:30.501 "is_configured": true, 00:18:30.501 "data_offset": 2048, 00:18:30.501 "data_size": 63488 00:18:30.501 }, 00:18:30.501 { 00:18:30.501 "name": "BaseBdev3", 00:18:30.501 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:30.501 "is_configured": true, 00:18:30.501 "data_offset": 2048, 00:18:30.501 "data_size": 63488 00:18:30.501 } 00:18:30.501 ] 00:18:30.501 }' 00:18:30.501 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.501 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:30.501 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.759 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.759 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:30.759 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.759 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.759 [2024-11-20 08:52:01.427797] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:30.759 [2024-11-20 08:52:01.479052] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:30.759 [2024-11-20 08:52:01.479321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.759 [2024-11-20 08:52:01.479361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:30.759 [2024-11-20 08:52:01.479376] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:30.759 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.759 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:30.759 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.759 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.759 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:30.759 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.759 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:30.759 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.759 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.759 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.759 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.759 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.759 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.759 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.759 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.759 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.759 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.759 "name": "raid_bdev1", 00:18:30.759 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:30.759 "strip_size_kb": 64, 00:18:30.759 "state": "online", 00:18:30.759 "raid_level": "raid5f", 00:18:30.759 "superblock": true, 00:18:30.759 "num_base_bdevs": 3, 00:18:30.759 "num_base_bdevs_discovered": 2, 00:18:30.759 "num_base_bdevs_operational": 2, 00:18:30.759 "base_bdevs_list": [ 00:18:30.759 { 00:18:30.759 "name": null, 00:18:30.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.759 "is_configured": false, 00:18:30.759 "data_offset": 0, 00:18:30.759 "data_size": 63488 00:18:30.759 }, 00:18:30.759 { 00:18:30.759 "name": "BaseBdev2", 00:18:30.759 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:30.759 "is_configured": true, 00:18:30.759 "data_offset": 2048, 00:18:30.759 "data_size": 63488 00:18:30.759 }, 00:18:30.759 { 00:18:30.759 "name": "BaseBdev3", 00:18:30.759 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:30.759 "is_configured": true, 00:18:30.759 "data_offset": 2048, 00:18:30.759 "data_size": 63488 00:18:30.759 } 00:18:30.759 ] 00:18:30.759 }' 00:18:30.759 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.759 08:52:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.327 08:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:31.327 08:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:31.327 08:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:31.327 08:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:31.327 08:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.327 08:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.327 08:52:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.327 08:52:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.327 08:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.327 08:52:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.327 08:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.327 "name": "raid_bdev1", 00:18:31.327 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:31.327 "strip_size_kb": 64, 00:18:31.327 "state": "online", 00:18:31.327 "raid_level": "raid5f", 00:18:31.327 "superblock": true, 00:18:31.327 "num_base_bdevs": 3, 00:18:31.327 "num_base_bdevs_discovered": 2, 00:18:31.327 "num_base_bdevs_operational": 2, 00:18:31.327 "base_bdevs_list": [ 00:18:31.327 { 00:18:31.327 "name": null, 00:18:31.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.327 "is_configured": false, 00:18:31.327 "data_offset": 0, 00:18:31.327 "data_size": 63488 00:18:31.327 }, 00:18:31.327 { 00:18:31.327 "name": "BaseBdev2", 00:18:31.327 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:31.327 "is_configured": true, 00:18:31.327 "data_offset": 2048, 00:18:31.327 "data_size": 63488 00:18:31.327 }, 00:18:31.327 { 00:18:31.327 "name": "BaseBdev3", 00:18:31.327 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:31.327 "is_configured": true, 
00:18:31.327 "data_offset": 2048, 00:18:31.327 "data_size": 63488 00:18:31.327 } 00:18:31.327 ] 00:18:31.327 }' 00:18:31.327 08:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.327 08:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:31.327 08:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.327 08:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:31.327 08:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:31.327 08:52:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.327 08:52:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.327 08:52:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.327 08:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:31.327 08:52:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.327 08:52:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.327 [2024-11-20 08:52:02.193510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:31.327 [2024-11-20 08:52:02.193604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.327 [2024-11-20 08:52:02.193639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:31.327 [2024-11-20 08:52:02.193654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.327 [2024-11-20 08:52:02.194319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.327 [2024-11-20 
08:52:02.194368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:31.327 [2024-11-20 08:52:02.194471] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:31.327 [2024-11-20 08:52:02.194499] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:31.327 [2024-11-20 08:52:02.194543] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:31.327 [2024-11-20 08:52:02.194571] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:31.327 BaseBdev1 00:18:31.327 08:52:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.327 08:52:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:32.705 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:32.705 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.705 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.705 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:32.705 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:32.705 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:32.705 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.705 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.705 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.705 08:52:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.705 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.705 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.705 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.705 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.705 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.705 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.705 "name": "raid_bdev1", 00:18:32.705 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:32.705 "strip_size_kb": 64, 00:18:32.705 "state": "online", 00:18:32.705 "raid_level": "raid5f", 00:18:32.705 "superblock": true, 00:18:32.705 "num_base_bdevs": 3, 00:18:32.705 "num_base_bdevs_discovered": 2, 00:18:32.705 "num_base_bdevs_operational": 2, 00:18:32.705 "base_bdevs_list": [ 00:18:32.705 { 00:18:32.705 "name": null, 00:18:32.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.705 "is_configured": false, 00:18:32.705 "data_offset": 0, 00:18:32.705 "data_size": 63488 00:18:32.705 }, 00:18:32.705 { 00:18:32.705 "name": "BaseBdev2", 00:18:32.705 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:32.705 "is_configured": true, 00:18:32.705 "data_offset": 2048, 00:18:32.705 "data_size": 63488 00:18:32.705 }, 00:18:32.705 { 00:18:32.705 "name": "BaseBdev3", 00:18:32.705 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:32.705 "is_configured": true, 00:18:32.705 "data_offset": 2048, 00:18:32.705 "data_size": 63488 00:18:32.705 } 00:18:32.705 ] 00:18:32.705 }' 00:18:32.705 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.705 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:32.964 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:32.964 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.964 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:32.964 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:32.964 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.964 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.964 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.964 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.964 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.964 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.964 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.964 "name": "raid_bdev1", 00:18:32.964 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:32.964 "strip_size_kb": 64, 00:18:32.964 "state": "online", 00:18:32.964 "raid_level": "raid5f", 00:18:32.964 "superblock": true, 00:18:32.964 "num_base_bdevs": 3, 00:18:32.964 "num_base_bdevs_discovered": 2, 00:18:32.964 "num_base_bdevs_operational": 2, 00:18:32.964 "base_bdevs_list": [ 00:18:32.964 { 00:18:32.964 "name": null, 00:18:32.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.964 "is_configured": false, 00:18:32.964 "data_offset": 0, 00:18:32.964 "data_size": 63488 00:18:32.964 }, 00:18:32.964 { 00:18:32.964 "name": "BaseBdev2", 00:18:32.964 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 
00:18:32.964 "is_configured": true, 00:18:32.964 "data_offset": 2048, 00:18:32.964 "data_size": 63488 00:18:32.964 }, 00:18:32.964 { 00:18:32.964 "name": "BaseBdev3", 00:18:32.964 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:32.964 "is_configured": true, 00:18:32.964 "data_offset": 2048, 00:18:32.964 "data_size": 63488 00:18:32.964 } 00:18:32.964 ] 00:18:32.964 }' 00:18:32.964 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.964 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:32.964 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.223 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:33.223 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:33.223 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:18:33.223 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:33.223 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:33.223 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:33.223 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:33.223 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:33.223 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:33.223 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.223 08:52:03 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.223 [2024-11-20 08:52:03.898101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:33.223 [2024-11-20 08:52:03.898491] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:33.223 [2024-11-20 08:52:03.898525] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:33.223 request: 00:18:33.223 { 00:18:33.223 "base_bdev": "BaseBdev1", 00:18:33.223 "raid_bdev": "raid_bdev1", 00:18:33.223 "method": "bdev_raid_add_base_bdev", 00:18:33.223 "req_id": 1 00:18:33.223 } 00:18:33.223 Got JSON-RPC error response 00:18:33.223 response: 00:18:33.223 { 00:18:33.223 "code": -22, 00:18:33.223 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:33.223 } 00:18:33.223 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:33.223 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:18:33.223 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:33.224 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:33.224 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:33.224 08:52:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:34.160 08:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:34.160 08:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.160 08:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.160 08:52:04 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:34.160 08:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:34.160 08:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:34.160 08:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.160 08:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.160 08:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.160 08:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.160 08:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.160 08:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.160 08:52:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.160 08:52:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.160 08:52:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.160 08:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.160 "name": "raid_bdev1", 00:18:34.160 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:34.160 "strip_size_kb": 64, 00:18:34.160 "state": "online", 00:18:34.160 "raid_level": "raid5f", 00:18:34.160 "superblock": true, 00:18:34.160 "num_base_bdevs": 3, 00:18:34.160 "num_base_bdevs_discovered": 2, 00:18:34.160 "num_base_bdevs_operational": 2, 00:18:34.160 "base_bdevs_list": [ 00:18:34.160 { 00:18:34.160 "name": null, 00:18:34.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.160 "is_configured": false, 00:18:34.160 "data_offset": 0, 00:18:34.160 "data_size": 63488 00:18:34.160 }, 00:18:34.160 { 00:18:34.160 
"name": "BaseBdev2", 00:18:34.160 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:34.160 "is_configured": true, 00:18:34.160 "data_offset": 2048, 00:18:34.160 "data_size": 63488 00:18:34.161 }, 00:18:34.161 { 00:18:34.161 "name": "BaseBdev3", 00:18:34.161 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:34.161 "is_configured": true, 00:18:34.161 "data_offset": 2048, 00:18:34.161 "data_size": 63488 00:18:34.161 } 00:18:34.161 ] 00:18:34.161 }' 00:18:34.161 08:52:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.161 08:52:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.728 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:34.728 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.728 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:34.728 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:34.728 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.728 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.728 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.728 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.728 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.728 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.728 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.728 "name": "raid_bdev1", 00:18:34.728 "uuid": "2e7ffe1d-d03d-4ffe-972b-44052f8e3601", 00:18:34.728 
"strip_size_kb": 64, 00:18:34.728 "state": "online", 00:18:34.728 "raid_level": "raid5f", 00:18:34.728 "superblock": true, 00:18:34.728 "num_base_bdevs": 3, 00:18:34.728 "num_base_bdevs_discovered": 2, 00:18:34.728 "num_base_bdevs_operational": 2, 00:18:34.728 "base_bdevs_list": [ 00:18:34.728 { 00:18:34.728 "name": null, 00:18:34.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.728 "is_configured": false, 00:18:34.728 "data_offset": 0, 00:18:34.728 "data_size": 63488 00:18:34.728 }, 00:18:34.728 { 00:18:34.728 "name": "BaseBdev2", 00:18:34.728 "uuid": "e68576ca-2731-5d0b-9630-94f1df36cf98", 00:18:34.728 "is_configured": true, 00:18:34.728 "data_offset": 2048, 00:18:34.728 "data_size": 63488 00:18:34.728 }, 00:18:34.728 { 00:18:34.728 "name": "BaseBdev3", 00:18:34.728 "uuid": "edce3b6b-48a6-5299-b7ab-9361d5720f32", 00:18:34.728 "is_configured": true, 00:18:34.728 "data_offset": 2048, 00:18:34.728 "data_size": 63488 00:18:34.728 } 00:18:34.728 ] 00:18:34.728 }' 00:18:34.728 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.728 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:34.728 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.728 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:34.728 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82401 00:18:34.728 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82401 ']' 00:18:34.728 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82401 00:18:34.728 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:34.729 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.729 08:52:05 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82401 00:18:34.729 killing process with pid 82401 00:18:34.729 Received shutdown signal, test time was about 60.000000 seconds 00:18:34.729 00:18:34.729 Latency(us) 00:18:34.729 [2024-11-20T08:52:05.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.729 [2024-11-20T08:52:05.645Z] =================================================================================================================== 00:18:34.729 [2024-11-20T08:52:05.645Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:34.729 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:34.729 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:34.729 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82401' 00:18:34.729 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82401 00:18:34.729 [2024-11-20 08:52:05.610813] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:34.729 08:52:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82401 00:18:34.729 [2024-11-20 08:52:05.610971] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:34.729 [2024-11-20 08:52:05.611048] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:34.729 [2024-11-20 08:52:05.611068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:35.295 [2024-11-20 08:52:05.948048] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:36.230 ************************************ 00:18:36.230 END TEST raid5f_rebuild_test_sb 00:18:36.230 ************************************ 00:18:36.230 08:52:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:36.230 00:18:36.230 real 0m24.694s 00:18:36.230 user 0m33.003s 00:18:36.230 sys 0m2.487s 00:18:36.230 08:52:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:36.230 08:52:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.230 08:52:06 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:18:36.230 08:52:06 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:18:36.230 08:52:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:36.230 08:52:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:36.230 08:52:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:36.230 ************************************ 00:18:36.230 START TEST raid5f_state_function_test 00:18:36.230 ************************************ 00:18:36.230 08:52:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:18:36.230 08:52:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:36.230 08:52:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:36.230 08:52:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:36.230 08:52:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:36.230 08:52:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:36.230 Process raid pid: 83160 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83160 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83160' 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83160 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83160 ']' 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.230 08:52:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.230 [2024-11-20 08:52:07.116859] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:18:36.230 [2024-11-20 08:52:07.117301] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.489 [2024-11-20 08:52:07.302411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.748 [2024-11-20 08:52:07.434383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.748 [2024-11-20 08:52:07.642115] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:36.748 [2024-11-20 08:52:07.642385] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:37.315 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.315 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:18:37.315 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:37.315 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.315 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.315 [2024-11-20 08:52:08.070273] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:37.315 [2024-11-20 08:52:08.070514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:37.315 [2024-11-20 08:52:08.070545] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:37.315 [2024-11-20 08:52:08.070564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:37.316 [2024-11-20 08:52:08.070574] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:18:37.316 [2024-11-20 08:52:08.070588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:37.316 [2024-11-20 08:52:08.070598] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:37.316 [2024-11-20 08:52:08.070611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:37.316 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.316 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:37.316 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:37.316 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:37.316 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:37.316 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:37.316 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:37.316 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.316 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.316 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.316 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.316 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.316 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.316 08:52:08 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:37.316 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.316 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.316 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.316 "name": "Existed_Raid", 00:18:37.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.316 "strip_size_kb": 64, 00:18:37.316 "state": "configuring", 00:18:37.316 "raid_level": "raid5f", 00:18:37.316 "superblock": false, 00:18:37.316 "num_base_bdevs": 4, 00:18:37.316 "num_base_bdevs_discovered": 0, 00:18:37.316 "num_base_bdevs_operational": 4, 00:18:37.316 "base_bdevs_list": [ 00:18:37.316 { 00:18:37.316 "name": "BaseBdev1", 00:18:37.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.316 "is_configured": false, 00:18:37.316 "data_offset": 0, 00:18:37.316 "data_size": 0 00:18:37.316 }, 00:18:37.316 { 00:18:37.316 "name": "BaseBdev2", 00:18:37.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.316 "is_configured": false, 00:18:37.316 "data_offset": 0, 00:18:37.316 "data_size": 0 00:18:37.316 }, 00:18:37.316 { 00:18:37.316 "name": "BaseBdev3", 00:18:37.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.316 "is_configured": false, 00:18:37.316 "data_offset": 0, 00:18:37.316 "data_size": 0 00:18:37.316 }, 00:18:37.316 { 00:18:37.316 "name": "BaseBdev4", 00:18:37.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.316 "is_configured": false, 00:18:37.316 "data_offset": 0, 00:18:37.316 "data_size": 0 00:18:37.316 } 00:18:37.316 ] 00:18:37.316 }' 00:18:37.316 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.316 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.884 [2024-11-20 08:52:08.582288] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:37.884 [2024-11-20 08:52:08.582487] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.884 [2024-11-20 08:52:08.590276] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:37.884 [2024-11-20 08:52:08.590455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:37.884 [2024-11-20 08:52:08.590590] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:37.884 [2024-11-20 08:52:08.590652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:37.884 [2024-11-20 08:52:08.590767] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:37.884 [2024-11-20 08:52:08.590829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:37.884 [2024-11-20 08:52:08.590979] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:18:37.884 [2024-11-20 08:52:08.591014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.884 [2024-11-20 08:52:08.635743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:37.884 BaseBdev1 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.884 
08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.884 [ 00:18:37.884 { 00:18:37.884 "name": "BaseBdev1", 00:18:37.884 "aliases": [ 00:18:37.884 "b6ba18a3-97f8-40ca-8ceb-f173ec29a992" 00:18:37.884 ], 00:18:37.884 "product_name": "Malloc disk", 00:18:37.884 "block_size": 512, 00:18:37.884 "num_blocks": 65536, 00:18:37.884 "uuid": "b6ba18a3-97f8-40ca-8ceb-f173ec29a992", 00:18:37.884 "assigned_rate_limits": { 00:18:37.884 "rw_ios_per_sec": 0, 00:18:37.884 "rw_mbytes_per_sec": 0, 00:18:37.884 "r_mbytes_per_sec": 0, 00:18:37.884 "w_mbytes_per_sec": 0 00:18:37.884 }, 00:18:37.884 "claimed": true, 00:18:37.884 "claim_type": "exclusive_write", 00:18:37.884 "zoned": false, 00:18:37.884 "supported_io_types": { 00:18:37.884 "read": true, 00:18:37.884 "write": true, 00:18:37.884 "unmap": true, 00:18:37.884 "flush": true, 00:18:37.884 "reset": true, 00:18:37.884 "nvme_admin": false, 00:18:37.884 "nvme_io": false, 00:18:37.884 "nvme_io_md": false, 00:18:37.884 "write_zeroes": true, 00:18:37.884 "zcopy": true, 00:18:37.884 "get_zone_info": false, 00:18:37.884 "zone_management": false, 00:18:37.884 "zone_append": false, 00:18:37.884 "compare": false, 00:18:37.884 "compare_and_write": false, 00:18:37.884 "abort": true, 00:18:37.884 "seek_hole": false, 00:18:37.884 "seek_data": false, 00:18:37.884 "copy": true, 00:18:37.884 "nvme_iov_md": false 00:18:37.884 }, 00:18:37.884 "memory_domains": [ 00:18:37.884 { 00:18:37.884 "dma_device_id": "system", 00:18:37.884 "dma_device_type": 1 00:18:37.884 }, 00:18:37.884 { 00:18:37.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.884 "dma_device_type": 2 00:18:37.884 } 00:18:37.884 ], 00:18:37.884 "driver_specific": {} 00:18:37.884 } 
00:18:37.884 ] 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:37.884 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.884 "name": "Existed_Raid", 00:18:37.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.884 "strip_size_kb": 64, 00:18:37.884 "state": "configuring", 00:18:37.884 "raid_level": "raid5f", 00:18:37.884 "superblock": false, 00:18:37.884 "num_base_bdevs": 4, 00:18:37.884 "num_base_bdevs_discovered": 1, 00:18:37.884 "num_base_bdevs_operational": 4, 00:18:37.884 "base_bdevs_list": [ 00:18:37.884 { 00:18:37.884 "name": "BaseBdev1", 00:18:37.884 "uuid": "b6ba18a3-97f8-40ca-8ceb-f173ec29a992", 00:18:37.884 "is_configured": true, 00:18:37.884 "data_offset": 0, 00:18:37.884 "data_size": 65536 00:18:37.884 }, 00:18:37.884 { 00:18:37.884 "name": "BaseBdev2", 00:18:37.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.884 "is_configured": false, 00:18:37.884 "data_offset": 0, 00:18:37.884 "data_size": 0 00:18:37.884 }, 00:18:37.884 { 00:18:37.884 "name": "BaseBdev3", 00:18:37.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.884 "is_configured": false, 00:18:37.884 "data_offset": 0, 00:18:37.884 "data_size": 0 00:18:37.884 }, 00:18:37.884 { 00:18:37.884 "name": "BaseBdev4", 00:18:37.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.884 "is_configured": false, 00:18:37.884 "data_offset": 0, 00:18:37.884 "data_size": 0 00:18:37.884 } 00:18:37.884 ] 00:18:37.885 }' 00:18:37.885 08:52:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.885 08:52:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.453 
[2024-11-20 08:52:09.155914] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:38.453 [2024-11-20 08:52:09.156131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.453 [2024-11-20 08:52:09.163970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:38.453 [2024-11-20 08:52:09.166666] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:38.453 [2024-11-20 08:52:09.166861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:38.453 [2024-11-20 08:52:09.166998] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:38.453 [2024-11-20 08:52:09.167177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:38.453 [2024-11-20 08:52:09.167307] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:38.453 [2024-11-20 08:52:09.167379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.453 "name": "Existed_Raid", 00:18:38.453 "uuid": "00000000-0000-0000-0000-000000000000", 
00:18:38.453 "strip_size_kb": 64, 00:18:38.453 "state": "configuring", 00:18:38.453 "raid_level": "raid5f", 00:18:38.453 "superblock": false, 00:18:38.453 "num_base_bdevs": 4, 00:18:38.453 "num_base_bdevs_discovered": 1, 00:18:38.453 "num_base_bdevs_operational": 4, 00:18:38.453 "base_bdevs_list": [ 00:18:38.453 { 00:18:38.453 "name": "BaseBdev1", 00:18:38.453 "uuid": "b6ba18a3-97f8-40ca-8ceb-f173ec29a992", 00:18:38.453 "is_configured": true, 00:18:38.453 "data_offset": 0, 00:18:38.453 "data_size": 65536 00:18:38.453 }, 00:18:38.453 { 00:18:38.453 "name": "BaseBdev2", 00:18:38.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.453 "is_configured": false, 00:18:38.453 "data_offset": 0, 00:18:38.453 "data_size": 0 00:18:38.453 }, 00:18:38.453 { 00:18:38.453 "name": "BaseBdev3", 00:18:38.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.453 "is_configured": false, 00:18:38.453 "data_offset": 0, 00:18:38.453 "data_size": 0 00:18:38.453 }, 00:18:38.453 { 00:18:38.453 "name": "BaseBdev4", 00:18:38.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.453 "is_configured": false, 00:18:38.453 "data_offset": 0, 00:18:38.453 "data_size": 0 00:18:38.453 } 00:18:38.453 ] 00:18:38.453 }' 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.453 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.021 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:39.021 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.021 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.021 BaseBdev2 00:18:39.021 [2024-11-20 08:52:09.702551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:39.021 08:52:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.021 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:39.021 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:39.021 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:39.021 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:39.021 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:39.021 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:39.021 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:39.021 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.021 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.021 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.021 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:39.021 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.021 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.021 [ 00:18:39.021 { 00:18:39.021 "name": "BaseBdev2", 00:18:39.021 "aliases": [ 00:18:39.021 "a9b6ca1a-1fdd-4967-91d4-b769480e8eed" 00:18:39.021 ], 00:18:39.021 "product_name": "Malloc disk", 00:18:39.021 "block_size": 512, 00:18:39.021 "num_blocks": 65536, 00:18:39.021 "uuid": "a9b6ca1a-1fdd-4967-91d4-b769480e8eed", 00:18:39.021 "assigned_rate_limits": { 00:18:39.021 "rw_ios_per_sec": 0, 00:18:39.021 "rw_mbytes_per_sec": 0, 00:18:39.021 
"r_mbytes_per_sec": 0, 00:18:39.021 "w_mbytes_per_sec": 0 00:18:39.021 }, 00:18:39.021 "claimed": true, 00:18:39.021 "claim_type": "exclusive_write", 00:18:39.021 "zoned": false, 00:18:39.021 "supported_io_types": { 00:18:39.021 "read": true, 00:18:39.021 "write": true, 00:18:39.021 "unmap": true, 00:18:39.021 "flush": true, 00:18:39.021 "reset": true, 00:18:39.021 "nvme_admin": false, 00:18:39.021 "nvme_io": false, 00:18:39.021 "nvme_io_md": false, 00:18:39.021 "write_zeroes": true, 00:18:39.021 "zcopy": true, 00:18:39.021 "get_zone_info": false, 00:18:39.021 "zone_management": false, 00:18:39.021 "zone_append": false, 00:18:39.021 "compare": false, 00:18:39.021 "compare_and_write": false, 00:18:39.021 "abort": true, 00:18:39.021 "seek_hole": false, 00:18:39.021 "seek_data": false, 00:18:39.021 "copy": true, 00:18:39.021 "nvme_iov_md": false 00:18:39.021 }, 00:18:39.021 "memory_domains": [ 00:18:39.021 { 00:18:39.021 "dma_device_id": "system", 00:18:39.021 "dma_device_type": 1 00:18:39.022 }, 00:18:39.022 { 00:18:39.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.022 "dma_device_type": 2 00:18:39.022 } 00:18:39.022 ], 00:18:39.022 "driver_specific": {} 00:18:39.022 } 00:18:39.022 ] 00:18:39.022 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.022 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:39.022 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:39.022 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:39.022 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:39.022 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:39.022 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:18:39.022 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:39.022 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:39.022 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:39.022 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.022 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.022 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.022 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.022 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.022 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.022 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.022 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.022 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.022 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.022 "name": "Existed_Raid", 00:18:39.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.022 "strip_size_kb": 64, 00:18:39.022 "state": "configuring", 00:18:39.022 "raid_level": "raid5f", 00:18:39.022 "superblock": false, 00:18:39.022 "num_base_bdevs": 4, 00:18:39.022 "num_base_bdevs_discovered": 2, 00:18:39.022 "num_base_bdevs_operational": 4, 00:18:39.022 "base_bdevs_list": [ 00:18:39.022 { 00:18:39.022 "name": "BaseBdev1", 00:18:39.022 "uuid": 
"b6ba18a3-97f8-40ca-8ceb-f173ec29a992", 00:18:39.022 "is_configured": true, 00:18:39.022 "data_offset": 0, 00:18:39.022 "data_size": 65536 00:18:39.022 }, 00:18:39.022 { 00:18:39.022 "name": "BaseBdev2", 00:18:39.022 "uuid": "a9b6ca1a-1fdd-4967-91d4-b769480e8eed", 00:18:39.022 "is_configured": true, 00:18:39.022 "data_offset": 0, 00:18:39.022 "data_size": 65536 00:18:39.022 }, 00:18:39.022 { 00:18:39.022 "name": "BaseBdev3", 00:18:39.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.022 "is_configured": false, 00:18:39.022 "data_offset": 0, 00:18:39.022 "data_size": 0 00:18:39.022 }, 00:18:39.022 { 00:18:39.022 "name": "BaseBdev4", 00:18:39.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.022 "is_configured": false, 00:18:39.022 "data_offset": 0, 00:18:39.022 "data_size": 0 00:18:39.022 } 00:18:39.022 ] 00:18:39.022 }' 00:18:39.022 08:52:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.022 08:52:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.591 [2024-11-20 08:52:10.307076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:39.591 BaseBdev3 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.591 [ 00:18:39.591 { 00:18:39.591 "name": "BaseBdev3", 00:18:39.591 "aliases": [ 00:18:39.591 "d47cc238-d6ca-4069-9087-cddaef057bcf" 00:18:39.591 ], 00:18:39.591 "product_name": "Malloc disk", 00:18:39.591 "block_size": 512, 00:18:39.591 "num_blocks": 65536, 00:18:39.591 "uuid": "d47cc238-d6ca-4069-9087-cddaef057bcf", 00:18:39.591 "assigned_rate_limits": { 00:18:39.591 "rw_ios_per_sec": 0, 00:18:39.591 "rw_mbytes_per_sec": 0, 00:18:39.591 "r_mbytes_per_sec": 0, 00:18:39.591 "w_mbytes_per_sec": 0 00:18:39.591 }, 00:18:39.591 "claimed": true, 00:18:39.591 "claim_type": "exclusive_write", 00:18:39.591 "zoned": false, 00:18:39.591 "supported_io_types": { 00:18:39.591 "read": true, 00:18:39.591 "write": true, 00:18:39.591 "unmap": true, 00:18:39.591 "flush": true, 00:18:39.591 "reset": true, 00:18:39.591 "nvme_admin": false, 
00:18:39.591 "nvme_io": false, 00:18:39.591 "nvme_io_md": false, 00:18:39.591 "write_zeroes": true, 00:18:39.591 "zcopy": true, 00:18:39.591 "get_zone_info": false, 00:18:39.591 "zone_management": false, 00:18:39.591 "zone_append": false, 00:18:39.591 "compare": false, 00:18:39.591 "compare_and_write": false, 00:18:39.591 "abort": true, 00:18:39.591 "seek_hole": false, 00:18:39.591 "seek_data": false, 00:18:39.591 "copy": true, 00:18:39.591 "nvme_iov_md": false 00:18:39.591 }, 00:18:39.591 "memory_domains": [ 00:18:39.591 { 00:18:39.591 "dma_device_id": "system", 00:18:39.591 "dma_device_type": 1 00:18:39.591 }, 00:18:39.591 { 00:18:39.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.591 "dma_device_type": 2 00:18:39.591 } 00:18:39.591 ], 00:18:39.591 "driver_specific": {} 00:18:39.591 } 00:18:39.591 ] 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.591 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.591 "name": "Existed_Raid", 00:18:39.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.591 "strip_size_kb": 64, 00:18:39.591 "state": "configuring", 00:18:39.591 "raid_level": "raid5f", 00:18:39.591 "superblock": false, 00:18:39.591 "num_base_bdevs": 4, 00:18:39.591 "num_base_bdevs_discovered": 3, 00:18:39.591 "num_base_bdevs_operational": 4, 00:18:39.591 "base_bdevs_list": [ 00:18:39.591 { 00:18:39.591 "name": "BaseBdev1", 00:18:39.591 "uuid": "b6ba18a3-97f8-40ca-8ceb-f173ec29a992", 00:18:39.591 "is_configured": true, 00:18:39.591 "data_offset": 0, 00:18:39.591 "data_size": 65536 00:18:39.591 }, 00:18:39.591 { 00:18:39.591 "name": "BaseBdev2", 00:18:39.591 "uuid": "a9b6ca1a-1fdd-4967-91d4-b769480e8eed", 00:18:39.591 "is_configured": true, 00:18:39.591 "data_offset": 0, 00:18:39.591 "data_size": 65536 00:18:39.591 }, 00:18:39.591 { 
00:18:39.591 "name": "BaseBdev3", 00:18:39.591 "uuid": "d47cc238-d6ca-4069-9087-cddaef057bcf", 00:18:39.591 "is_configured": true, 00:18:39.591 "data_offset": 0, 00:18:39.591 "data_size": 65536 00:18:39.591 }, 00:18:39.592 { 00:18:39.592 "name": "BaseBdev4", 00:18:39.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.592 "is_configured": false, 00:18:39.592 "data_offset": 0, 00:18:39.592 "data_size": 0 00:18:39.592 } 00:18:39.592 ] 00:18:39.592 }' 00:18:39.592 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.592 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.160 [2024-11-20 08:52:10.886644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:40.160 [2024-11-20 08:52:10.886885] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:40.160 [2024-11-20 08:52:10.886911] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:40.160 [2024-11-20 08:52:10.887270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:40.160 [2024-11-20 08:52:10.894305] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:40.160 [2024-11-20 08:52:10.894464] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:40.160 [2024-11-20 08:52:10.894978] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.160 BaseBdev4 00:18:40.160 08:52:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.160 [ 00:18:40.160 { 00:18:40.160 "name": "BaseBdev4", 00:18:40.160 "aliases": [ 00:18:40.160 "0afa56e7-418b-4ea1-9ddc-03068ef5f7b4" 00:18:40.160 ], 00:18:40.160 "product_name": "Malloc disk", 00:18:40.160 "block_size": 512, 00:18:40.160 "num_blocks": 65536, 00:18:40.160 "uuid": "0afa56e7-418b-4ea1-9ddc-03068ef5f7b4", 00:18:40.160 "assigned_rate_limits": { 00:18:40.160 "rw_ios_per_sec": 0, 00:18:40.160 
"rw_mbytes_per_sec": 0, 00:18:40.160 "r_mbytes_per_sec": 0, 00:18:40.160 "w_mbytes_per_sec": 0 00:18:40.160 }, 00:18:40.160 "claimed": true, 00:18:40.160 "claim_type": "exclusive_write", 00:18:40.160 "zoned": false, 00:18:40.160 "supported_io_types": { 00:18:40.160 "read": true, 00:18:40.160 "write": true, 00:18:40.160 "unmap": true, 00:18:40.160 "flush": true, 00:18:40.160 "reset": true, 00:18:40.160 "nvme_admin": false, 00:18:40.160 "nvme_io": false, 00:18:40.160 "nvme_io_md": false, 00:18:40.160 "write_zeroes": true, 00:18:40.160 "zcopy": true, 00:18:40.160 "get_zone_info": false, 00:18:40.160 "zone_management": false, 00:18:40.160 "zone_append": false, 00:18:40.160 "compare": false, 00:18:40.160 "compare_and_write": false, 00:18:40.160 "abort": true, 00:18:40.160 "seek_hole": false, 00:18:40.160 "seek_data": false, 00:18:40.160 "copy": true, 00:18:40.160 "nvme_iov_md": false 00:18:40.160 }, 00:18:40.160 "memory_domains": [ 00:18:40.160 { 00:18:40.160 "dma_device_id": "system", 00:18:40.160 "dma_device_type": 1 00:18:40.160 }, 00:18:40.160 { 00:18:40.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.160 "dma_device_type": 2 00:18:40.160 } 00:18:40.160 ], 00:18:40.160 "driver_specific": {} 00:18:40.160 } 00:18:40.160 ] 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:40.160 08:52:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.160 "name": "Existed_Raid", 00:18:40.160 "uuid": "4e007a5d-8486-453b-9ffb-f65b34a0a7b1", 00:18:40.160 "strip_size_kb": 64, 00:18:40.160 "state": "online", 00:18:40.160 "raid_level": "raid5f", 00:18:40.160 "superblock": false, 00:18:40.160 "num_base_bdevs": 4, 00:18:40.160 "num_base_bdevs_discovered": 4, 00:18:40.160 "num_base_bdevs_operational": 4, 00:18:40.160 "base_bdevs_list": [ 00:18:40.160 { 00:18:40.160 "name": 
"BaseBdev1", 00:18:40.160 "uuid": "b6ba18a3-97f8-40ca-8ceb-f173ec29a992", 00:18:40.160 "is_configured": true, 00:18:40.160 "data_offset": 0, 00:18:40.160 "data_size": 65536 00:18:40.160 }, 00:18:40.160 { 00:18:40.160 "name": "BaseBdev2", 00:18:40.160 "uuid": "a9b6ca1a-1fdd-4967-91d4-b769480e8eed", 00:18:40.160 "is_configured": true, 00:18:40.160 "data_offset": 0, 00:18:40.160 "data_size": 65536 00:18:40.160 }, 00:18:40.160 { 00:18:40.160 "name": "BaseBdev3", 00:18:40.160 "uuid": "d47cc238-d6ca-4069-9087-cddaef057bcf", 00:18:40.160 "is_configured": true, 00:18:40.160 "data_offset": 0, 00:18:40.160 "data_size": 65536 00:18:40.160 }, 00:18:40.160 { 00:18:40.160 "name": "BaseBdev4", 00:18:40.160 "uuid": "0afa56e7-418b-4ea1-9ddc-03068ef5f7b4", 00:18:40.160 "is_configured": true, 00:18:40.160 "data_offset": 0, 00:18:40.160 "data_size": 65536 00:18:40.160 } 00:18:40.160 ] 00:18:40.160 }' 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.160 08:52:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.729 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:40.729 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:40.729 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:40.729 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:40.729 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:40.729 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:40.729 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:40.729 08:52:11 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:40.729 08:52:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.729 08:52:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.729 [2024-11-20 08:52:11.466829] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:40.729 08:52:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.729 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:40.729 "name": "Existed_Raid", 00:18:40.729 "aliases": [ 00:18:40.729 "4e007a5d-8486-453b-9ffb-f65b34a0a7b1" 00:18:40.729 ], 00:18:40.729 "product_name": "Raid Volume", 00:18:40.729 "block_size": 512, 00:18:40.729 "num_blocks": 196608, 00:18:40.729 "uuid": "4e007a5d-8486-453b-9ffb-f65b34a0a7b1", 00:18:40.729 "assigned_rate_limits": { 00:18:40.729 "rw_ios_per_sec": 0, 00:18:40.729 "rw_mbytes_per_sec": 0, 00:18:40.729 "r_mbytes_per_sec": 0, 00:18:40.729 "w_mbytes_per_sec": 0 00:18:40.729 }, 00:18:40.729 "claimed": false, 00:18:40.729 "zoned": false, 00:18:40.729 "supported_io_types": { 00:18:40.729 "read": true, 00:18:40.729 "write": true, 00:18:40.729 "unmap": false, 00:18:40.729 "flush": false, 00:18:40.729 "reset": true, 00:18:40.729 "nvme_admin": false, 00:18:40.729 "nvme_io": false, 00:18:40.729 "nvme_io_md": false, 00:18:40.729 "write_zeroes": true, 00:18:40.729 "zcopy": false, 00:18:40.729 "get_zone_info": false, 00:18:40.729 "zone_management": false, 00:18:40.729 "zone_append": false, 00:18:40.729 "compare": false, 00:18:40.729 "compare_and_write": false, 00:18:40.729 "abort": false, 00:18:40.729 "seek_hole": false, 00:18:40.729 "seek_data": false, 00:18:40.729 "copy": false, 00:18:40.729 "nvme_iov_md": false 00:18:40.729 }, 00:18:40.729 "driver_specific": { 00:18:40.729 "raid": { 00:18:40.729 "uuid": "4e007a5d-8486-453b-9ffb-f65b34a0a7b1", 00:18:40.729 "strip_size_kb": 64, 
00:18:40.729 "state": "online", 00:18:40.729 "raid_level": "raid5f", 00:18:40.729 "superblock": false, 00:18:40.729 "num_base_bdevs": 4, 00:18:40.729 "num_base_bdevs_discovered": 4, 00:18:40.729 "num_base_bdevs_operational": 4, 00:18:40.729 "base_bdevs_list": [ 00:18:40.729 { 00:18:40.729 "name": "BaseBdev1", 00:18:40.729 "uuid": "b6ba18a3-97f8-40ca-8ceb-f173ec29a992", 00:18:40.729 "is_configured": true, 00:18:40.729 "data_offset": 0, 00:18:40.729 "data_size": 65536 00:18:40.729 }, 00:18:40.729 { 00:18:40.729 "name": "BaseBdev2", 00:18:40.729 "uuid": "a9b6ca1a-1fdd-4967-91d4-b769480e8eed", 00:18:40.729 "is_configured": true, 00:18:40.729 "data_offset": 0, 00:18:40.729 "data_size": 65536 00:18:40.729 }, 00:18:40.729 { 00:18:40.729 "name": "BaseBdev3", 00:18:40.729 "uuid": "d47cc238-d6ca-4069-9087-cddaef057bcf", 00:18:40.729 "is_configured": true, 00:18:40.729 "data_offset": 0, 00:18:40.729 "data_size": 65536 00:18:40.729 }, 00:18:40.729 { 00:18:40.729 "name": "BaseBdev4", 00:18:40.729 "uuid": "0afa56e7-418b-4ea1-9ddc-03068ef5f7b4", 00:18:40.729 "is_configured": true, 00:18:40.729 "data_offset": 0, 00:18:40.729 "data_size": 65536 00:18:40.730 } 00:18:40.730 ] 00:18:40.730 } 00:18:40.730 } 00:18:40.730 }' 00:18:40.730 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:40.730 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:40.730 BaseBdev2 00:18:40.730 BaseBdev3 00:18:40.730 BaseBdev4' 00:18:40.730 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.730 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:40.730 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:40.730 08:52:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:40.730 08:52:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.730 08:52:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.730 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.730 08:52:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.988 08:52:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:40.988 [2024-11-20 08:52:11.842754] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:41.247 08:52:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.247 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:41.247 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:41.247 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:41.247 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:41.247 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:41.247 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:41.247 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:41.247 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.247 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:41.247 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:41.247 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:41.247 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.247 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.247 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.247 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.247 08:52:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.248 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.248 08:52:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.248 08:52:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.248 08:52:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.248 08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.248 "name": "Existed_Raid", 00:18:41.248 "uuid": "4e007a5d-8486-453b-9ffb-f65b34a0a7b1", 00:18:41.248 "strip_size_kb": 64, 00:18:41.248 "state": "online", 00:18:41.248 "raid_level": "raid5f", 00:18:41.248 "superblock": false, 00:18:41.248 "num_base_bdevs": 4, 00:18:41.248 "num_base_bdevs_discovered": 3, 00:18:41.248 "num_base_bdevs_operational": 3, 00:18:41.248 "base_bdevs_list": [ 00:18:41.248 { 00:18:41.248 "name": null, 00:18:41.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.248 "is_configured": false, 00:18:41.248 "data_offset": 0, 00:18:41.248 "data_size": 65536 00:18:41.248 }, 00:18:41.248 { 00:18:41.248 "name": "BaseBdev2", 00:18:41.248 "uuid": "a9b6ca1a-1fdd-4967-91d4-b769480e8eed", 00:18:41.248 "is_configured": true, 00:18:41.248 "data_offset": 0, 00:18:41.248 "data_size": 65536 00:18:41.248 }, 00:18:41.248 { 00:18:41.248 "name": "BaseBdev3", 00:18:41.248 "uuid": "d47cc238-d6ca-4069-9087-cddaef057bcf", 00:18:41.248 "is_configured": true, 00:18:41.248 "data_offset": 0, 00:18:41.248 "data_size": 65536 00:18:41.248 }, 00:18:41.248 { 00:18:41.248 "name": "BaseBdev4", 00:18:41.248 "uuid": "0afa56e7-418b-4ea1-9ddc-03068ef5f7b4", 00:18:41.248 "is_configured": true, 00:18:41.248 "data_offset": 0, 00:18:41.248 "data_size": 65536 00:18:41.248 } 00:18:41.248 ] 00:18:41.248 }' 00:18:41.248 
08:52:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.248 08:52:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.817 [2024-11-20 08:52:12.494228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:41.817 [2024-11-20 08:52:12.494498] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:41.817 [2024-11-20 08:52:12.573501] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.817 [2024-11-20 08:52:12.629536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.817 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.076 [2024-11-20 08:52:12.773479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:42.076 [2024-11-20 08:52:12.773754] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.076 BaseBdev2 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.076 [ 00:18:42.076 { 00:18:42.076 "name": "BaseBdev2", 00:18:42.076 "aliases": [ 00:18:42.076 "e0f9335f-404f-4ebd-a392-b7887deb5477" 00:18:42.076 ], 00:18:42.076 "product_name": "Malloc disk", 00:18:42.076 "block_size": 512, 00:18:42.076 "num_blocks": 65536, 00:18:42.076 "uuid": "e0f9335f-404f-4ebd-a392-b7887deb5477", 00:18:42.076 "assigned_rate_limits": { 00:18:42.076 "rw_ios_per_sec": 0, 00:18:42.076 "rw_mbytes_per_sec": 0, 00:18:42.076 "r_mbytes_per_sec": 0, 00:18:42.076 "w_mbytes_per_sec": 0 00:18:42.076 }, 00:18:42.076 "claimed": false, 00:18:42.076 "zoned": false, 00:18:42.076 "supported_io_types": { 00:18:42.076 "read": true, 00:18:42.076 "write": true, 00:18:42.076 "unmap": true, 00:18:42.076 "flush": true, 00:18:42.076 "reset": true, 00:18:42.076 "nvme_admin": false, 00:18:42.076 "nvme_io": false, 00:18:42.076 "nvme_io_md": false, 00:18:42.076 "write_zeroes": true, 00:18:42.076 "zcopy": true, 00:18:42.076 "get_zone_info": false, 00:18:42.076 "zone_management": false, 00:18:42.076 "zone_append": false, 00:18:42.076 "compare": false, 00:18:42.076 "compare_and_write": false, 00:18:42.076 "abort": true, 00:18:42.076 "seek_hole": false, 00:18:42.076 "seek_data": false, 00:18:42.076 "copy": true, 00:18:42.076 "nvme_iov_md": false 00:18:42.076 }, 00:18:42.076 "memory_domains": [ 00:18:42.076 { 00:18:42.076 "dma_device_id": "system", 00:18:42.076 
"dma_device_type": 1 00:18:42.076 }, 00:18:42.076 { 00:18:42.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.076 "dma_device_type": 2 00:18:42.076 } 00:18:42.076 ], 00:18:42.076 "driver_specific": {} 00:18:42.076 } 00:18:42.076 ] 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.076 08:52:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.335 BaseBdev3 00:18:42.335 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.335 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:42.335 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:42.335 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:42.335 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:42.335 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:42.335 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:42.335 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:42.335 08:52:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.335 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.335 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.335 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:42.335 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.335 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.335 [ 00:18:42.335 { 00:18:42.335 "name": "BaseBdev3", 00:18:42.335 "aliases": [ 00:18:42.335 "9d494eeb-756d-4ab0-87fe-312cb7908a26" 00:18:42.335 ], 00:18:42.335 "product_name": "Malloc disk", 00:18:42.335 "block_size": 512, 00:18:42.335 "num_blocks": 65536, 00:18:42.335 "uuid": "9d494eeb-756d-4ab0-87fe-312cb7908a26", 00:18:42.335 "assigned_rate_limits": { 00:18:42.335 "rw_ios_per_sec": 0, 00:18:42.335 "rw_mbytes_per_sec": 0, 00:18:42.335 "r_mbytes_per_sec": 0, 00:18:42.335 "w_mbytes_per_sec": 0 00:18:42.335 }, 00:18:42.335 "claimed": false, 00:18:42.335 "zoned": false, 00:18:42.335 "supported_io_types": { 00:18:42.335 "read": true, 00:18:42.335 "write": true, 00:18:42.335 "unmap": true, 00:18:42.335 "flush": true, 00:18:42.335 "reset": true, 00:18:42.336 "nvme_admin": false, 00:18:42.336 "nvme_io": false, 00:18:42.336 "nvme_io_md": false, 00:18:42.336 "write_zeroes": true, 00:18:42.336 "zcopy": true, 00:18:42.336 "get_zone_info": false, 00:18:42.336 "zone_management": false, 00:18:42.336 "zone_append": false, 00:18:42.336 "compare": false, 00:18:42.336 "compare_and_write": false, 00:18:42.336 "abort": true, 00:18:42.336 "seek_hole": false, 00:18:42.336 "seek_data": false, 00:18:42.336 "copy": true, 00:18:42.336 "nvme_iov_md": false 00:18:42.336 }, 00:18:42.336 "memory_domains": [ 00:18:42.336 { 00:18:42.336 
"dma_device_id": "system", 00:18:42.336 "dma_device_type": 1 00:18:42.336 }, 00:18:42.336 { 00:18:42.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.336 "dma_device_type": 2 00:18:42.336 } 00:18:42.336 ], 00:18:42.336 "driver_specific": {} 00:18:42.336 } 00:18:42.336 ] 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.336 BaseBdev4 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.336 [ 00:18:42.336 { 00:18:42.336 "name": "BaseBdev4", 00:18:42.336 "aliases": [ 00:18:42.336 "505e5c59-bc23-4cd6-963a-d84295ecf994" 00:18:42.336 ], 00:18:42.336 "product_name": "Malloc disk", 00:18:42.336 "block_size": 512, 00:18:42.336 "num_blocks": 65536, 00:18:42.336 "uuid": "505e5c59-bc23-4cd6-963a-d84295ecf994", 00:18:42.336 "assigned_rate_limits": { 00:18:42.336 "rw_ios_per_sec": 0, 00:18:42.336 "rw_mbytes_per_sec": 0, 00:18:42.336 "r_mbytes_per_sec": 0, 00:18:42.336 "w_mbytes_per_sec": 0 00:18:42.336 }, 00:18:42.336 "claimed": false, 00:18:42.336 "zoned": false, 00:18:42.336 "supported_io_types": { 00:18:42.336 "read": true, 00:18:42.336 "write": true, 00:18:42.336 "unmap": true, 00:18:42.336 "flush": true, 00:18:42.336 "reset": true, 00:18:42.336 "nvme_admin": false, 00:18:42.336 "nvme_io": false, 00:18:42.336 "nvme_io_md": false, 00:18:42.336 "write_zeroes": true, 00:18:42.336 "zcopy": true, 00:18:42.336 "get_zone_info": false, 00:18:42.336 "zone_management": false, 00:18:42.336 "zone_append": false, 00:18:42.336 "compare": false, 00:18:42.336 "compare_and_write": false, 00:18:42.336 "abort": true, 00:18:42.336 "seek_hole": false, 00:18:42.336 "seek_data": false, 00:18:42.336 "copy": true, 00:18:42.336 "nvme_iov_md": false 00:18:42.336 }, 00:18:42.336 "memory_domains": [ 
00:18:42.336 { 00:18:42.336 "dma_device_id": "system", 00:18:42.336 "dma_device_type": 1 00:18:42.336 }, 00:18:42.336 { 00:18:42.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.336 "dma_device_type": 2 00:18:42.336 } 00:18:42.336 ], 00:18:42.336 "driver_specific": {} 00:18:42.336 } 00:18:42.336 ] 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.336 [2024-11-20 08:52:13.118095] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:42.336 [2024-11-20 08:52:13.118181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:42.336 [2024-11-20 08:52:13.118214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:42.336 [2024-11-20 08:52:13.120617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:42.336 [2024-11-20 08:52:13.120697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.336 "name": "Existed_Raid", 00:18:42.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.336 "strip_size_kb": 64, 00:18:42.336 "state": "configuring", 00:18:42.336 "raid_level": "raid5f", 00:18:42.336 
"superblock": false, 00:18:42.336 "num_base_bdevs": 4, 00:18:42.336 "num_base_bdevs_discovered": 3, 00:18:42.336 "num_base_bdevs_operational": 4, 00:18:42.336 "base_bdevs_list": [ 00:18:42.336 { 00:18:42.336 "name": "BaseBdev1", 00:18:42.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.336 "is_configured": false, 00:18:42.336 "data_offset": 0, 00:18:42.336 "data_size": 0 00:18:42.336 }, 00:18:42.336 { 00:18:42.336 "name": "BaseBdev2", 00:18:42.336 "uuid": "e0f9335f-404f-4ebd-a392-b7887deb5477", 00:18:42.336 "is_configured": true, 00:18:42.336 "data_offset": 0, 00:18:42.336 "data_size": 65536 00:18:42.336 }, 00:18:42.336 { 00:18:42.336 "name": "BaseBdev3", 00:18:42.336 "uuid": "9d494eeb-756d-4ab0-87fe-312cb7908a26", 00:18:42.336 "is_configured": true, 00:18:42.336 "data_offset": 0, 00:18:42.336 "data_size": 65536 00:18:42.336 }, 00:18:42.336 { 00:18:42.336 "name": "BaseBdev4", 00:18:42.336 "uuid": "505e5c59-bc23-4cd6-963a-d84295ecf994", 00:18:42.336 "is_configured": true, 00:18:42.336 "data_offset": 0, 00:18:42.336 "data_size": 65536 00:18:42.336 } 00:18:42.336 ] 00:18:42.336 }' 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.336 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.903 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:42.903 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.903 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.903 [2024-11-20 08:52:13.622280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:42.903 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.903 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:18:42.903 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:42.903 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:42.903 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:42.903 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:42.903 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:42.903 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.903 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.903 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.903 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.903 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.903 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:42.903 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.903 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.903 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.903 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.903 "name": "Existed_Raid", 00:18:42.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.903 "strip_size_kb": 64, 00:18:42.903 "state": "configuring", 00:18:42.903 "raid_level": "raid5f", 00:18:42.903 "superblock": false, 
00:18:42.903 "num_base_bdevs": 4, 00:18:42.903 "num_base_bdevs_discovered": 2, 00:18:42.903 "num_base_bdevs_operational": 4, 00:18:42.903 "base_bdevs_list": [ 00:18:42.903 { 00:18:42.903 "name": "BaseBdev1", 00:18:42.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.903 "is_configured": false, 00:18:42.903 "data_offset": 0, 00:18:42.903 "data_size": 0 00:18:42.903 }, 00:18:42.903 { 00:18:42.903 "name": null, 00:18:42.903 "uuid": "e0f9335f-404f-4ebd-a392-b7887deb5477", 00:18:42.903 "is_configured": false, 00:18:42.903 "data_offset": 0, 00:18:42.903 "data_size": 65536 00:18:42.903 }, 00:18:42.903 { 00:18:42.903 "name": "BaseBdev3", 00:18:42.903 "uuid": "9d494eeb-756d-4ab0-87fe-312cb7908a26", 00:18:42.903 "is_configured": true, 00:18:42.903 "data_offset": 0, 00:18:42.903 "data_size": 65536 00:18:42.903 }, 00:18:42.903 { 00:18:42.903 "name": "BaseBdev4", 00:18:42.903 "uuid": "505e5c59-bc23-4cd6-963a-d84295ecf994", 00:18:42.903 "is_configured": true, 00:18:42.903 "data_offset": 0, 00:18:42.903 "data_size": 65536 00:18:42.903 } 00:18:42.903 ] 00:18:42.903 }' 00:18:42.903 08:52:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.903 08:52:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.470 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:43.470 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.470 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.470 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.470 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.470 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:43.470 
08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:43.470 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.470 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.470 [2024-11-20 08:52:14.244478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:43.470 BaseBdev1 00:18:43.470 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.470 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:43.470 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:43.470 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:43.470 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:43.470 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:43.470 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:43.470 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:43.470 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.470 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.470 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.470 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:43.470 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.470 
08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.470 [ 00:18:43.470 { 00:18:43.470 "name": "BaseBdev1", 00:18:43.470 "aliases": [ 00:18:43.470 "924b0d57-518d-4896-8146-cc98893fc28a" 00:18:43.470 ], 00:18:43.470 "product_name": "Malloc disk", 00:18:43.470 "block_size": 512, 00:18:43.470 "num_blocks": 65536, 00:18:43.470 "uuid": "924b0d57-518d-4896-8146-cc98893fc28a", 00:18:43.470 "assigned_rate_limits": { 00:18:43.470 "rw_ios_per_sec": 0, 00:18:43.470 "rw_mbytes_per_sec": 0, 00:18:43.470 "r_mbytes_per_sec": 0, 00:18:43.470 "w_mbytes_per_sec": 0 00:18:43.470 }, 00:18:43.470 "claimed": true, 00:18:43.470 "claim_type": "exclusive_write", 00:18:43.470 "zoned": false, 00:18:43.470 "supported_io_types": { 00:18:43.470 "read": true, 00:18:43.470 "write": true, 00:18:43.470 "unmap": true, 00:18:43.471 "flush": true, 00:18:43.471 "reset": true, 00:18:43.471 "nvme_admin": false, 00:18:43.471 "nvme_io": false, 00:18:43.471 "nvme_io_md": false, 00:18:43.471 "write_zeroes": true, 00:18:43.471 "zcopy": true, 00:18:43.471 "get_zone_info": false, 00:18:43.471 "zone_management": false, 00:18:43.471 "zone_append": false, 00:18:43.471 "compare": false, 00:18:43.471 "compare_and_write": false, 00:18:43.471 "abort": true, 00:18:43.471 "seek_hole": false, 00:18:43.471 "seek_data": false, 00:18:43.471 "copy": true, 00:18:43.471 "nvme_iov_md": false 00:18:43.471 }, 00:18:43.471 "memory_domains": [ 00:18:43.471 { 00:18:43.471 "dma_device_id": "system", 00:18:43.471 "dma_device_type": 1 00:18:43.471 }, 00:18:43.471 { 00:18:43.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.471 "dma_device_type": 2 00:18:43.471 } 00:18:43.471 ], 00:18:43.471 "driver_specific": {} 00:18:43.471 } 00:18:43.471 ] 00:18:43.471 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.471 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:43.471 08:52:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:43.471 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:43.471 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:43.471 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:43.471 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:43.471 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:43.471 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.471 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.471 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.471 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.471 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.471 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.471 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.471 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.471 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.471 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.471 "name": "Existed_Raid", 00:18:43.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.471 "strip_size_kb": 64, 00:18:43.471 "state": 
"configuring", 00:18:43.471 "raid_level": "raid5f", 00:18:43.471 "superblock": false, 00:18:43.471 "num_base_bdevs": 4, 00:18:43.471 "num_base_bdevs_discovered": 3, 00:18:43.471 "num_base_bdevs_operational": 4, 00:18:43.471 "base_bdevs_list": [ 00:18:43.471 { 00:18:43.471 "name": "BaseBdev1", 00:18:43.471 "uuid": "924b0d57-518d-4896-8146-cc98893fc28a", 00:18:43.471 "is_configured": true, 00:18:43.471 "data_offset": 0, 00:18:43.471 "data_size": 65536 00:18:43.471 }, 00:18:43.471 { 00:18:43.471 "name": null, 00:18:43.471 "uuid": "e0f9335f-404f-4ebd-a392-b7887deb5477", 00:18:43.471 "is_configured": false, 00:18:43.471 "data_offset": 0, 00:18:43.471 "data_size": 65536 00:18:43.471 }, 00:18:43.471 { 00:18:43.471 "name": "BaseBdev3", 00:18:43.471 "uuid": "9d494eeb-756d-4ab0-87fe-312cb7908a26", 00:18:43.471 "is_configured": true, 00:18:43.471 "data_offset": 0, 00:18:43.471 "data_size": 65536 00:18:43.471 }, 00:18:43.471 { 00:18:43.471 "name": "BaseBdev4", 00:18:43.471 "uuid": "505e5c59-bc23-4cd6-963a-d84295ecf994", 00:18:43.471 "is_configured": true, 00:18:43.471 "data_offset": 0, 00:18:43.471 "data_size": 65536 00:18:43.471 } 00:18:43.471 ] 00:18:43.471 }' 00:18:43.471 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.471 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.036 08:52:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.036 [2024-11-20 08:52:14.800704] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.036 08:52:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.036 "name": "Existed_Raid", 00:18:44.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.036 "strip_size_kb": 64, 00:18:44.036 "state": "configuring", 00:18:44.036 "raid_level": "raid5f", 00:18:44.036 "superblock": false, 00:18:44.036 "num_base_bdevs": 4, 00:18:44.036 "num_base_bdevs_discovered": 2, 00:18:44.036 "num_base_bdevs_operational": 4, 00:18:44.036 "base_bdevs_list": [ 00:18:44.036 { 00:18:44.036 "name": "BaseBdev1", 00:18:44.036 "uuid": "924b0d57-518d-4896-8146-cc98893fc28a", 00:18:44.036 "is_configured": true, 00:18:44.036 "data_offset": 0, 00:18:44.036 "data_size": 65536 00:18:44.036 }, 00:18:44.036 { 00:18:44.036 "name": null, 00:18:44.036 "uuid": "e0f9335f-404f-4ebd-a392-b7887deb5477", 00:18:44.036 "is_configured": false, 00:18:44.036 "data_offset": 0, 00:18:44.036 "data_size": 65536 00:18:44.036 }, 00:18:44.036 { 00:18:44.036 "name": null, 00:18:44.036 "uuid": "9d494eeb-756d-4ab0-87fe-312cb7908a26", 00:18:44.036 "is_configured": false, 00:18:44.036 "data_offset": 0, 00:18:44.036 "data_size": 65536 00:18:44.036 }, 00:18:44.036 { 00:18:44.036 "name": "BaseBdev4", 00:18:44.036 "uuid": "505e5c59-bc23-4cd6-963a-d84295ecf994", 00:18:44.036 "is_configured": true, 00:18:44.036 "data_offset": 0, 00:18:44.036 "data_size": 65536 00:18:44.036 } 00:18:44.036 ] 00:18:44.036 }' 00:18:44.036 08:52:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.036 08:52:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.601 [2024-11-20 08:52:15.316850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:44.601 
08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.601 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.601 "name": "Existed_Raid", 00:18:44.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.601 "strip_size_kb": 64, 00:18:44.601 "state": "configuring", 00:18:44.601 "raid_level": "raid5f", 00:18:44.601 "superblock": false, 00:18:44.601 "num_base_bdevs": 4, 00:18:44.601 "num_base_bdevs_discovered": 3, 00:18:44.601 "num_base_bdevs_operational": 4, 00:18:44.601 "base_bdevs_list": [ 00:18:44.601 { 00:18:44.601 "name": "BaseBdev1", 00:18:44.601 "uuid": "924b0d57-518d-4896-8146-cc98893fc28a", 00:18:44.601 "is_configured": true, 00:18:44.601 "data_offset": 0, 00:18:44.601 "data_size": 65536 00:18:44.601 }, 00:18:44.601 { 00:18:44.602 "name": null, 00:18:44.602 "uuid": "e0f9335f-404f-4ebd-a392-b7887deb5477", 00:18:44.602 "is_configured": 
false, 00:18:44.602 "data_offset": 0, 00:18:44.602 "data_size": 65536 00:18:44.602 }, 00:18:44.602 { 00:18:44.602 "name": "BaseBdev3", 00:18:44.602 "uuid": "9d494eeb-756d-4ab0-87fe-312cb7908a26", 00:18:44.602 "is_configured": true, 00:18:44.602 "data_offset": 0, 00:18:44.602 "data_size": 65536 00:18:44.602 }, 00:18:44.602 { 00:18:44.602 "name": "BaseBdev4", 00:18:44.602 "uuid": "505e5c59-bc23-4cd6-963a-d84295ecf994", 00:18:44.602 "is_configured": true, 00:18:44.602 "data_offset": 0, 00:18:44.602 "data_size": 65536 00:18:44.602 } 00:18:44.602 ] 00:18:44.602 }' 00:18:44.602 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.602 08:52:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.170 [2024-11-20 08:52:15.873029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.170 08:52:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.170 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.170 "name": "Existed_Raid", 00:18:45.170 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:45.170 "strip_size_kb": 64, 00:18:45.170 "state": "configuring", 00:18:45.170 "raid_level": "raid5f", 00:18:45.170 "superblock": false, 00:18:45.170 "num_base_bdevs": 4, 00:18:45.170 "num_base_bdevs_discovered": 2, 00:18:45.170 "num_base_bdevs_operational": 4, 00:18:45.170 "base_bdevs_list": [ 00:18:45.170 { 00:18:45.170 "name": null, 00:18:45.170 "uuid": "924b0d57-518d-4896-8146-cc98893fc28a", 00:18:45.170 "is_configured": false, 00:18:45.170 "data_offset": 0, 00:18:45.170 "data_size": 65536 00:18:45.170 }, 00:18:45.170 { 00:18:45.170 "name": null, 00:18:45.170 "uuid": "e0f9335f-404f-4ebd-a392-b7887deb5477", 00:18:45.170 "is_configured": false, 00:18:45.170 "data_offset": 0, 00:18:45.170 "data_size": 65536 00:18:45.170 }, 00:18:45.170 { 00:18:45.170 "name": "BaseBdev3", 00:18:45.170 "uuid": "9d494eeb-756d-4ab0-87fe-312cb7908a26", 00:18:45.170 "is_configured": true, 00:18:45.170 "data_offset": 0, 00:18:45.170 "data_size": 65536 00:18:45.170 }, 00:18:45.170 { 00:18:45.170 "name": "BaseBdev4", 00:18:45.170 "uuid": "505e5c59-bc23-4cd6-963a-d84295ecf994", 00:18:45.170 "is_configured": true, 00:18:45.170 "data_offset": 0, 00:18:45.170 "data_size": 65536 00:18:45.170 } 00:18:45.170 ] 00:18:45.170 }' 00:18:45.171 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.171 08:52:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.738 [2024-11-20 08:52:16.510412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.738 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.738 "name": "Existed_Raid", 00:18:45.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.738 "strip_size_kb": 64, 00:18:45.738 "state": "configuring", 00:18:45.738 "raid_level": "raid5f", 00:18:45.738 "superblock": false, 00:18:45.738 "num_base_bdevs": 4, 00:18:45.738 "num_base_bdevs_discovered": 3, 00:18:45.738 "num_base_bdevs_operational": 4, 00:18:45.738 "base_bdevs_list": [ 00:18:45.738 { 00:18:45.738 "name": null, 00:18:45.738 "uuid": "924b0d57-518d-4896-8146-cc98893fc28a", 00:18:45.738 "is_configured": false, 00:18:45.738 "data_offset": 0, 00:18:45.738 "data_size": 65536 00:18:45.738 }, 00:18:45.738 { 00:18:45.739 "name": "BaseBdev2", 00:18:45.739 "uuid": "e0f9335f-404f-4ebd-a392-b7887deb5477", 00:18:45.739 "is_configured": true, 00:18:45.739 "data_offset": 0, 00:18:45.739 "data_size": 65536 00:18:45.739 }, 00:18:45.739 { 00:18:45.739 "name": "BaseBdev3", 00:18:45.739 "uuid": "9d494eeb-756d-4ab0-87fe-312cb7908a26", 00:18:45.739 "is_configured": true, 00:18:45.739 "data_offset": 0, 00:18:45.739 "data_size": 65536 00:18:45.739 }, 00:18:45.739 { 00:18:45.739 "name": "BaseBdev4", 00:18:45.739 "uuid": "505e5c59-bc23-4cd6-963a-d84295ecf994", 00:18:45.739 "is_configured": true, 00:18:45.739 "data_offset": 0, 00:18:45.739 "data_size": 65536 00:18:45.739 } 00:18:45.739 ] 00:18:45.739 }' 00:18:45.739 08:52:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.739 08:52:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 924b0d57-518d-4896-8146-cc98893fc28a 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.305 [2024-11-20 08:52:17.147779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:46.305 [2024-11-20 
08:52:17.147871] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:46.305 [2024-11-20 08:52:17.147884] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:46.305 [2024-11-20 08:52:17.148245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:46.305 [2024-11-20 08:52:17.154382] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:46.305 [2024-11-20 08:52:17.154416] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:46.305 [2024-11-20 08:52:17.154758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.305 NewBaseBdev 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.305 [ 00:18:46.305 { 00:18:46.305 "name": "NewBaseBdev", 00:18:46.305 "aliases": [ 00:18:46.305 "924b0d57-518d-4896-8146-cc98893fc28a" 00:18:46.305 ], 00:18:46.305 "product_name": "Malloc disk", 00:18:46.305 "block_size": 512, 00:18:46.305 "num_blocks": 65536, 00:18:46.305 "uuid": "924b0d57-518d-4896-8146-cc98893fc28a", 00:18:46.305 "assigned_rate_limits": { 00:18:46.305 "rw_ios_per_sec": 0, 00:18:46.305 "rw_mbytes_per_sec": 0, 00:18:46.305 "r_mbytes_per_sec": 0, 00:18:46.305 "w_mbytes_per_sec": 0 00:18:46.305 }, 00:18:46.305 "claimed": true, 00:18:46.305 "claim_type": "exclusive_write", 00:18:46.305 "zoned": false, 00:18:46.305 "supported_io_types": { 00:18:46.305 "read": true, 00:18:46.305 "write": true, 00:18:46.305 "unmap": true, 00:18:46.305 "flush": true, 00:18:46.305 "reset": true, 00:18:46.305 "nvme_admin": false, 00:18:46.305 "nvme_io": false, 00:18:46.305 "nvme_io_md": false, 00:18:46.305 "write_zeroes": true, 00:18:46.305 "zcopy": true, 00:18:46.305 "get_zone_info": false, 00:18:46.305 "zone_management": false, 00:18:46.305 "zone_append": false, 00:18:46.305 "compare": false, 00:18:46.305 "compare_and_write": false, 00:18:46.305 "abort": true, 00:18:46.305 "seek_hole": false, 00:18:46.305 "seek_data": false, 00:18:46.305 "copy": true, 00:18:46.305 "nvme_iov_md": false 00:18:46.305 }, 00:18:46.305 "memory_domains": [ 00:18:46.305 { 00:18:46.305 "dma_device_id": "system", 00:18:46.305 "dma_device_type": 1 00:18:46.305 }, 00:18:46.305 { 00:18:46.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.305 "dma_device_type": 2 00:18:46.305 } 
00:18:46.305 ], 00:18:46.305 "driver_specific": {} 00:18:46.305 } 00:18:46.305 ] 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.305 08:52:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.564 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.564 "name": "Existed_Raid", 00:18:46.564 "uuid": "a0aa3a81-efd5-4ffd-9b49-a2c3f9229a53", 00:18:46.564 "strip_size_kb": 64, 00:18:46.564 "state": "online", 00:18:46.564 "raid_level": "raid5f", 00:18:46.564 "superblock": false, 00:18:46.564 "num_base_bdevs": 4, 00:18:46.564 "num_base_bdevs_discovered": 4, 00:18:46.564 "num_base_bdevs_operational": 4, 00:18:46.564 "base_bdevs_list": [ 00:18:46.564 { 00:18:46.564 "name": "NewBaseBdev", 00:18:46.564 "uuid": "924b0d57-518d-4896-8146-cc98893fc28a", 00:18:46.564 "is_configured": true, 00:18:46.564 "data_offset": 0, 00:18:46.564 "data_size": 65536 00:18:46.564 }, 00:18:46.564 { 00:18:46.564 "name": "BaseBdev2", 00:18:46.564 "uuid": "e0f9335f-404f-4ebd-a392-b7887deb5477", 00:18:46.564 "is_configured": true, 00:18:46.564 "data_offset": 0, 00:18:46.564 "data_size": 65536 00:18:46.564 }, 00:18:46.564 { 00:18:46.564 "name": "BaseBdev3", 00:18:46.564 "uuid": "9d494eeb-756d-4ab0-87fe-312cb7908a26", 00:18:46.564 "is_configured": true, 00:18:46.564 "data_offset": 0, 00:18:46.564 "data_size": 65536 00:18:46.564 }, 00:18:46.564 { 00:18:46.564 "name": "BaseBdev4", 00:18:46.564 "uuid": "505e5c59-bc23-4cd6-963a-d84295ecf994", 00:18:46.564 "is_configured": true, 00:18:46.564 "data_offset": 0, 00:18:46.564 "data_size": 65536 00:18:46.564 } 00:18:46.564 ] 00:18:46.564 }' 00:18:46.564 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.564 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.823 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:46.823 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:46.823 08:52:17 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:46.823 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:46.823 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:46.823 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:46.823 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:46.823 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:46.823 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.823 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.823 [2024-11-20 08:52:17.714676] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:46.823 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.082 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:47.082 "name": "Existed_Raid", 00:18:47.082 "aliases": [ 00:18:47.082 "a0aa3a81-efd5-4ffd-9b49-a2c3f9229a53" 00:18:47.082 ], 00:18:47.082 "product_name": "Raid Volume", 00:18:47.082 "block_size": 512, 00:18:47.082 "num_blocks": 196608, 00:18:47.082 "uuid": "a0aa3a81-efd5-4ffd-9b49-a2c3f9229a53", 00:18:47.082 "assigned_rate_limits": { 00:18:47.082 "rw_ios_per_sec": 0, 00:18:47.082 "rw_mbytes_per_sec": 0, 00:18:47.082 "r_mbytes_per_sec": 0, 00:18:47.082 "w_mbytes_per_sec": 0 00:18:47.082 }, 00:18:47.082 "claimed": false, 00:18:47.082 "zoned": false, 00:18:47.082 "supported_io_types": { 00:18:47.082 "read": true, 00:18:47.082 "write": true, 00:18:47.082 "unmap": false, 00:18:47.082 "flush": false, 00:18:47.082 "reset": true, 00:18:47.082 "nvme_admin": false, 00:18:47.082 "nvme_io": false, 00:18:47.082 "nvme_io_md": 
false, 00:18:47.082 "write_zeroes": true, 00:18:47.082 "zcopy": false, 00:18:47.082 "get_zone_info": false, 00:18:47.082 "zone_management": false, 00:18:47.082 "zone_append": false, 00:18:47.082 "compare": false, 00:18:47.082 "compare_and_write": false, 00:18:47.082 "abort": false, 00:18:47.082 "seek_hole": false, 00:18:47.082 "seek_data": false, 00:18:47.082 "copy": false, 00:18:47.082 "nvme_iov_md": false 00:18:47.082 }, 00:18:47.082 "driver_specific": { 00:18:47.082 "raid": { 00:18:47.082 "uuid": "a0aa3a81-efd5-4ffd-9b49-a2c3f9229a53", 00:18:47.082 "strip_size_kb": 64, 00:18:47.082 "state": "online", 00:18:47.082 "raid_level": "raid5f", 00:18:47.082 "superblock": false, 00:18:47.082 "num_base_bdevs": 4, 00:18:47.082 "num_base_bdevs_discovered": 4, 00:18:47.082 "num_base_bdevs_operational": 4, 00:18:47.082 "base_bdevs_list": [ 00:18:47.082 { 00:18:47.082 "name": "NewBaseBdev", 00:18:47.082 "uuid": "924b0d57-518d-4896-8146-cc98893fc28a", 00:18:47.082 "is_configured": true, 00:18:47.082 "data_offset": 0, 00:18:47.082 "data_size": 65536 00:18:47.082 }, 00:18:47.082 { 00:18:47.082 "name": "BaseBdev2", 00:18:47.082 "uuid": "e0f9335f-404f-4ebd-a392-b7887deb5477", 00:18:47.082 "is_configured": true, 00:18:47.082 "data_offset": 0, 00:18:47.082 "data_size": 65536 00:18:47.082 }, 00:18:47.082 { 00:18:47.082 "name": "BaseBdev3", 00:18:47.082 "uuid": "9d494eeb-756d-4ab0-87fe-312cb7908a26", 00:18:47.082 "is_configured": true, 00:18:47.082 "data_offset": 0, 00:18:47.082 "data_size": 65536 00:18:47.082 }, 00:18:47.082 { 00:18:47.082 "name": "BaseBdev4", 00:18:47.082 "uuid": "505e5c59-bc23-4cd6-963a-d84295ecf994", 00:18:47.082 "is_configured": true, 00:18:47.082 "data_offset": 0, 00:18:47.082 "data_size": 65536 00:18:47.082 } 00:18:47.082 ] 00:18:47.082 } 00:18:47.082 } 00:18:47.082 }' 00:18:47.082 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:47.082 08:52:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:47.082 BaseBdev2 00:18:47.082 BaseBdev3 00:18:47.082 BaseBdev4' 00:18:47.082 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.082 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:47.082 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:47.082 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.082 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:47.082 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.082 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.082 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.082 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:47.082 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:47.082 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:47.082 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.082 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:47.082 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.082 08:52:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:47.082 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.082 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:47.082 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:47.082 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:47.082 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:47.083 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.083 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.083 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.083 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.342 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:47.342 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:47.342 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:47.342 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.342 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:47.342 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.342 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.342 08:52:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.342 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:47.342 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:47.342 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:47.342 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.342 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.342 [2024-11-20 08:52:18.078431] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:47.342 [2024-11-20 08:52:18.078661] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:47.342 [2024-11-20 08:52:18.078875] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:47.342 [2024-11-20 08:52:18.079392] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:47.342 [2024-11-20 08:52:18.079421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:47.342 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.342 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83160 00:18:47.342 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83160 ']' 00:18:47.342 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83160 00:18:47.342 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:18:47.342 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:18:47.342 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83160 00:18:47.342 killing process with pid 83160 00:18:47.342 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:47.342 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:47.342 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83160' 00:18:47.342 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83160 00:18:47.342 [2024-11-20 08:52:18.121027] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:47.342 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83160 00:18:47.601 [2024-11-20 08:52:18.452708] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:48.535 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:48.535 00:18:48.535 real 0m12.426s 00:18:48.535 user 0m20.632s 00:18:48.535 sys 0m1.773s 00:18:48.535 ************************************ 00:18:48.535 END TEST raid5f_state_function_test 00:18:48.535 ************************************ 00:18:48.535 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:48.535 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.794 08:52:19 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:18:48.794 08:52:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:48.794 08:52:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:48.794 08:52:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.794 ************************************ 00:18:48.794 START TEST 
raid5f_state_function_test_sb
00:18:48.794 ************************************
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:18:48.794 Process raid pid: 83843
00:18:48.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']'
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83843
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83843'
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83843
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83843 ']'
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:48.794 08:52:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:48.794 [2024-11-20 08:52:19.581149] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization...
00:18:48.794 [2024-11-20 08:52:19.581590] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:49.124 [2024-11-20 08:52:19.755129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:49.124 [2024-11-20 08:52:19.882657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:49.382 [2024-11-20 08:52:20.089110] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:49.382 [2024-11-20 08:52:20.089406] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:49.948 08:52:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:49.948 08:52:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:18:49.948 08:52:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:18:49.948 08:52:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:49.948 08:52:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:49.948 [2024-11-20 08:52:20.585841] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:18:49.948 [2024-11-20 08:52:20.586069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:18:49.948 [2024-11-20 08:52:20.586222] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:49.948 [2024-11-20 08:52:20.586288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:49.948 [2024-11-20 08:52:20.586491] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:18:49.948 [2024-11-20 08:52:20.586560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:18:49.948 [2024-11-20 08:52:20.586726] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:18:49.948 [2024-11-20 08:52:20.586786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:18:49.948 08:52:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:49.948 08:52:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:18:49.948 08:52:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:49.948 08:52:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:49.948 08:52:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:18:49.948 08:52:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:49.948 08:52:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:49.948 08:52:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:49.948 08:52:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:49.948 08:52:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:49.948 08:52:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:49.948 08:52:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:49.948 08:52:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:49.948 08:52:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:49.948 08:52:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:49.948 08:52:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:49.948 08:52:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:49.948 "name": "Existed_Raid",
00:18:49.948 "uuid": "3abec479-00db-4fba-af88-c1f01e7d0900",
00:18:49.948 "strip_size_kb": 64,
00:18:49.948 "state": "configuring",
00:18:49.948 "raid_level": "raid5f",
00:18:49.948 "superblock": true,
00:18:49.948 "num_base_bdevs": 4,
00:18:49.948 "num_base_bdevs_discovered": 0,
00:18:49.948 "num_base_bdevs_operational": 4,
00:18:49.948 "base_bdevs_list": [
00:18:49.948 {
00:18:49.948 "name": "BaseBdev1",
00:18:49.948 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:49.948 "is_configured": false,
00:18:49.948 "data_offset": 0,
00:18:49.948 "data_size": 0
00:18:49.948 },
00:18:49.948 {
00:18:49.948 "name": "BaseBdev2",
00:18:49.948 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:49.948 "is_configured": false,
00:18:49.948 "data_offset": 0,
00:18:49.948 "data_size": 0
00:18:49.948 },
00:18:49.948 {
00:18:49.948 "name": "BaseBdev3",
00:18:49.948 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:49.948 "is_configured": false,
00:18:49.948 "data_offset": 0,
00:18:49.948 "data_size": 0
00:18:49.948 },
00:18:49.948 {
00:18:49.948 "name": "BaseBdev4",
00:18:49.948 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:49.948 "is_configured": false,
00:18:49.948 "data_offset": 0,
00:18:49.948 "data_size": 0
00:18:49.948 }
00:18:49.948 ]
00:18:49.948 }'
00:18:49.948 08:52:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:49.948 08:52:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:50.207 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:18:50.207 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:50.207 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:50.207 [2024-11-20 08:52:21.081939] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:18:50.207 [2024-11-20 08:52:21.081985] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:18:50.207 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:50.207 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:18:50.207 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:50.207 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:50.207 [2024-11-20 08:52:21.093909] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:18:50.207 [2024-11-20 08:52:21.093965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:18:50.207 [2024-11-20 08:52:21.093982] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:50.207 [2024-11-20 08:52:21.093999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:50.207 [2024-11-20 08:52:21.094009] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:18:50.207 [2024-11-20 08:52:21.094023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:18:50.207 [2024-11-20 08:52:21.094032] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:18:50.207 [2024-11-20 08:52:21.094046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:18:50.207 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:50.207 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:18:50.207 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:50.207 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:50.466 [2024-11-20 08:52:21.138361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:50.466 BaseBdev1
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:50.466 [
00:18:50.466 {
00:18:50.466 "name": "BaseBdev1",
00:18:50.466 "aliases": [
00:18:50.466 "3ace28d5-412c-47da-9433-5cafd8702e95"
00:18:50.466 ],
00:18:50.466 "product_name": "Malloc disk",
00:18:50.466 "block_size": 512,
00:18:50.466 "num_blocks": 65536,
00:18:50.466 "uuid": "3ace28d5-412c-47da-9433-5cafd8702e95",
00:18:50.466 "assigned_rate_limits": {
00:18:50.466 "rw_ios_per_sec": 0,
00:18:50.466 "rw_mbytes_per_sec": 0,
00:18:50.466 "r_mbytes_per_sec": 0,
00:18:50.466 "w_mbytes_per_sec": 0
00:18:50.466 },
00:18:50.466 "claimed": true,
00:18:50.466 "claim_type": "exclusive_write",
00:18:50.466 "zoned": false,
00:18:50.466 "supported_io_types": {
00:18:50.466 "read": true,
00:18:50.466 "write": true,
00:18:50.466 "unmap": true,
00:18:50.466 "flush": true,
00:18:50.466 "reset": true,
00:18:50.466 "nvme_admin": false,
00:18:50.466 "nvme_io": false,
00:18:50.466 "nvme_io_md": false,
00:18:50.466 "write_zeroes": true,
00:18:50.466 "zcopy": true,
00:18:50.466 "get_zone_info": false,
00:18:50.466 "zone_management": false,
00:18:50.466 "zone_append": false,
00:18:50.466 "compare": false,
00:18:50.466 "compare_and_write": false,
00:18:50.466 "abort": true,
00:18:50.466 "seek_hole": false,
00:18:50.466 "seek_data": false,
00:18:50.466 "copy": true,
00:18:50.466 "nvme_iov_md": false
00:18:50.466 },
00:18:50.466 "memory_domains": [
00:18:50.466 {
00:18:50.466 "dma_device_id": "system",
00:18:50.466 "dma_device_type": 1
00:18:50.466 },
00:18:50.466 {
00:18:50.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:50.466 "dma_device_type": 2
00:18:50.466 }
00:18:50.466 ],
00:18:50.466 "driver_specific": {}
00:18:50.466 }
00:18:50.466 ]
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:50.466 "name": "Existed_Raid",
00:18:50.466 "uuid": "7d999228-21e9-4d7e-aa5e-d001951e0019",
00:18:50.466 "strip_size_kb": 64,
00:18:50.466 "state": "configuring",
00:18:50.466 "raid_level": "raid5f",
00:18:50.466 "superblock": true,
00:18:50.466 "num_base_bdevs": 4,
00:18:50.466 "num_base_bdevs_discovered": 1,
00:18:50.466 "num_base_bdevs_operational": 4,
00:18:50.466 "base_bdevs_list": [
00:18:50.466 {
00:18:50.466 "name": "BaseBdev1",
00:18:50.466 "uuid": "3ace28d5-412c-47da-9433-5cafd8702e95",
00:18:50.466 "is_configured": true,
00:18:50.466 "data_offset": 2048,
00:18:50.466 "data_size": 63488
00:18:50.466 },
00:18:50.466 {
00:18:50.466 "name": "BaseBdev2",
00:18:50.466 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:50.466 "is_configured": false,
00:18:50.466 "data_offset": 0,
00:18:50.466 "data_size": 0
00:18:50.466 },
00:18:50.466 {
00:18:50.466 "name": "BaseBdev3",
00:18:50.466 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:50.466 "is_configured": false,
00:18:50.466 "data_offset": 0,
00:18:50.466 "data_size": 0
00:18:50.466 },
00:18:50.466 {
00:18:50.466 "name": "BaseBdev4",
00:18:50.466 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:50.466 "is_configured": false,
00:18:50.466 "data_offset": 0,
00:18:50.466 "data_size": 0
00:18:50.466 }
00:18:50.466 ]
00:18:50.466 }'
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:50.466 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:51.034 [2024-11-20 08:52:21.646548] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:18:51.034 [2024-11-20 08:52:21.646622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:51.034 [2024-11-20 08:52:21.658659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:51.034 [2024-11-20 08:52:21.661135] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:51.034 [2024-11-20 08:52:21.661384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:51.034 [2024-11-20 08:52:21.661414] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:18:51.034 [2024-11-20 08:52:21.661435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:18:51.034 [2024-11-20 08:52:21.661446] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:18:51.034 [2024-11-20 08:52:21.661459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:51.034 "name": "Existed_Raid",
00:18:51.034 "uuid": "f98aa3e9-ab73-4ea5-ac08-e036ba97c9db",
00:18:51.034 "strip_size_kb": 64,
00:18:51.034 "state": "configuring",
00:18:51.034 "raid_level": "raid5f",
00:18:51.034 "superblock": true,
00:18:51.034 "num_base_bdevs": 4,
00:18:51.034 "num_base_bdevs_discovered": 1,
00:18:51.034 "num_base_bdevs_operational": 4,
00:18:51.034 "base_bdevs_list": [
00:18:51.034 {
00:18:51.034 "name": "BaseBdev1",
00:18:51.034 "uuid": "3ace28d5-412c-47da-9433-5cafd8702e95",
00:18:51.034 "is_configured": true,
00:18:51.034 "data_offset": 2048,
00:18:51.034 "data_size": 63488
00:18:51.034 },
00:18:51.034 {
00:18:51.034 "name": "BaseBdev2",
00:18:51.034 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:51.034 "is_configured": false,
00:18:51.034 "data_offset": 0,
00:18:51.034 "data_size": 0
00:18:51.034 },
00:18:51.034 {
00:18:51.034 "name": "BaseBdev3",
00:18:51.034 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:51.034 "is_configured": false,
00:18:51.034 "data_offset": 0,
00:18:51.034 "data_size": 0
00:18:51.034 },
00:18:51.034 {
00:18:51.034 "name": "BaseBdev4",
00:18:51.034 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:51.034 "is_configured": false,
00:18:51.034 "data_offset": 0,
00:18:51.034 "data_size": 0
00:18:51.034 }
00:18:51.034 ]
00:18:51.034 }'
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:51.034 08:52:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:51.293 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:18:51.293 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:51.293 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:51.293 [2024-11-20 08:52:22.206188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:51.552 BaseBdev2
00:18:51.552 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:51.552 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:18:51.552 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:18:51.552 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:18:51.552 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:18:51.552 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:18:51.552 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:18:51.552 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:18:51.552 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:51.552 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:51.552 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:51.552 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:18:51.552 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:51.552 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:51.552 [
00:18:51.552 {
00:18:51.552 "name": "BaseBdev2",
00:18:51.552 "aliases": [
00:18:51.552 "bc34e7c2-5df5-4008-abff-427b09cc0379"
00:18:51.552 ],
00:18:51.552 "product_name": "Malloc disk",
00:18:51.552 "block_size": 512,
00:18:51.552 "num_blocks": 65536,
00:18:51.552 "uuid": "bc34e7c2-5df5-4008-abff-427b09cc0379",
00:18:51.552 "assigned_rate_limits": {
00:18:51.552 "rw_ios_per_sec": 0,
00:18:51.552 "rw_mbytes_per_sec": 0,
00:18:51.552 "r_mbytes_per_sec": 0,
00:18:51.552 "w_mbytes_per_sec": 0
00:18:51.552 },
00:18:51.552 "claimed": true,
00:18:51.552 "claim_type": "exclusive_write",
00:18:51.552 "zoned": false,
00:18:51.552 "supported_io_types": {
00:18:51.552 "read": true,
00:18:51.553 "write": true,
00:18:51.553 "unmap": true,
00:18:51.553 "flush": true,
00:18:51.553 "reset": true,
00:18:51.553 "nvme_admin": false,
00:18:51.553 "nvme_io": false,
00:18:51.553 "nvme_io_md": false,
00:18:51.553 "write_zeroes": true,
00:18:51.553 "zcopy": true,
00:18:51.553 "get_zone_info": false,
00:18:51.553 "zone_management": false,
00:18:51.553 "zone_append": false,
00:18:51.553 "compare": false,
00:18:51.553 "compare_and_write": false,
00:18:51.553 "abort": true,
00:18:51.553 "seek_hole": false,
00:18:51.553 "seek_data": false,
00:18:51.553 "copy": true,
00:18:51.553 "nvme_iov_md": false
00:18:51.553 },
00:18:51.553 "memory_domains": [
00:18:51.553 {
00:18:51.553 "dma_device_id": "system",
00:18:51.553 "dma_device_type": 1
00:18:51.553 },
00:18:51.553 {
00:18:51.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:51.553 "dma_device_type": 2
00:18:51.553 }
00:18:51.553 ],
00:18:51.553 "driver_specific": {}
00:18:51.553 }
00:18:51.553 ]
00:18:51.553 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:51.553 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:18:51.553 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:18:51.553 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:18:51.553 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:18:51.553 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:51.553 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:51.553 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:18:51.553 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:51.553 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:51.553 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:51.553 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:51.553 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:51.553 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:51.553 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:51.553 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:51.553 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:51.553 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:51.553 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:51.553 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:51.553 "name": "Existed_Raid",
00:18:51.553 "uuid": "f98aa3e9-ab73-4ea5-ac08-e036ba97c9db",
00:18:51.553 "strip_size_kb": 64,
00:18:51.553 "state": "configuring",
00:18:51.553 "raid_level": "raid5f",
00:18:51.553 "superblock": true,
00:18:51.553 "num_base_bdevs": 4,
00:18:51.553 "num_base_bdevs_discovered": 2,
00:18:51.553 "num_base_bdevs_operational": 4,
00:18:51.553 "base_bdevs_list": [
00:18:51.553 {
00:18:51.553 "name": "BaseBdev1",
00:18:51.553 "uuid": "3ace28d5-412c-47da-9433-5cafd8702e95",
00:18:51.553 "is_configured": true,
00:18:51.553 "data_offset": 2048,
00:18:51.553 "data_size": 63488
00:18:51.553 },
00:18:51.553 {
00:18:51.553 "name": "BaseBdev2",
00:18:51.553 "uuid": "bc34e7c2-5df5-4008-abff-427b09cc0379",
00:18:51.553 "is_configured": true,
00:18:51.553 "data_offset": 2048,
00:18:51.553 "data_size": 63488
00:18:51.553 },
00:18:51.553 {
00:18:51.553 "name": "BaseBdev3",
00:18:51.553 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:51.553 "is_configured": false,
00:18:51.553 "data_offset": 0,
00:18:51.553 "data_size": 0
00:18:51.553 },
00:18:51.553 {
00:18:51.553 "name": "BaseBdev4",
00:18:51.553 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:51.553 "is_configured": false,
00:18:51.553 "data_offset": 0,
00:18:51.553 "data_size": 0
00:18:51.553 }
00:18:51.553 ]
00:18:51.553 }'
00:18:51.553 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:51.553 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:52.120 [2024-11-20 08:52:22.835823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:18:52.120 BaseBdev3
00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:52.120 [
00:18:52.120 {
00:18:52.120 "name": "BaseBdev3",
00:18:52.120 "aliases": [
00:18:52.120 "b886f297-80e8-4028-80a8-f84f94077f79"
00:18:52.120 ],
00:18:52.120 "product_name": "Malloc disk",
00:18:52.120 "block_size": 512,
00:18:52.120 "num_blocks": 65536,
00:18:52.120 "uuid": "b886f297-80e8-4028-80a8-f84f94077f79",
00:18:52.120 "assigned_rate_limits": {
00:18:52.120 "rw_ios_per_sec": 0,
00:18:52.120 "rw_mbytes_per_sec": 0,
00:18:52.120 "r_mbytes_per_sec": 0,
00:18:52.120 "w_mbytes_per_sec": 0
00:18:52.120 },
00:18:52.120 "claimed": true,
00:18:52.120 "claim_type": "exclusive_write",
00:18:52.120 "zoned": false,
00:18:52.120 "supported_io_types": {
00:18:52.120 "read": true,
00:18:52.120 "write": true,
00:18:52.120 "unmap": true,
00:18:52.120 "flush": true,
00:18:52.120 "reset": true,
00:18:52.120 "nvme_admin": false,
00:18:52.120 "nvme_io": false,
00:18:52.120 "nvme_io_md": false,
00:18:52.120 "write_zeroes": true,
00:18:52.120 "zcopy": true,
00:18:52.120 "get_zone_info": false,
00:18:52.120 "zone_management": false,
00:18:52.120 "zone_append": false,
00:18:52.120 "compare": false,
00:18:52.120 "compare_and_write": false,
00:18:52.120 "abort": true,
00:18:52.120 "seek_hole": false,
00:18:52.120 "seek_data": false,
00:18:52.120 "copy": true,
00:18:52.120 "nvme_iov_md": false
00:18:52.120 },
00:18:52.120 "memory_domains": [
00:18:52.120 {
00:18:52.120 "dma_device_id": "system",
00:18:52.120 "dma_device_type": 1
00:18:52.120 },
00:18:52.120 {
00:18:52.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:52.120 "dma_device_type": 2
00:18:52.120 }
00:18:52.120 ],
00:18:52.120 "driver_specific": {}
00:18:52.120 }
00:18:52.120 ]
00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103
-- # local raid_bdev_name=Existed_Raid 00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.120 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.120 "name": "Existed_Raid", 00:18:52.120 "uuid": "f98aa3e9-ab73-4ea5-ac08-e036ba97c9db", 00:18:52.120 "strip_size_kb": 64, 00:18:52.120 "state": "configuring", 00:18:52.120 "raid_level": "raid5f", 00:18:52.120 "superblock": true, 00:18:52.120 "num_base_bdevs": 4, 00:18:52.120 "num_base_bdevs_discovered": 3, 
00:18:52.120 "num_base_bdevs_operational": 4, 00:18:52.120 "base_bdevs_list": [ 00:18:52.120 { 00:18:52.120 "name": "BaseBdev1", 00:18:52.120 "uuid": "3ace28d5-412c-47da-9433-5cafd8702e95", 00:18:52.120 "is_configured": true, 00:18:52.120 "data_offset": 2048, 00:18:52.120 "data_size": 63488 00:18:52.120 }, 00:18:52.120 { 00:18:52.120 "name": "BaseBdev2", 00:18:52.120 "uuid": "bc34e7c2-5df5-4008-abff-427b09cc0379", 00:18:52.120 "is_configured": true, 00:18:52.120 "data_offset": 2048, 00:18:52.120 "data_size": 63488 00:18:52.120 }, 00:18:52.120 { 00:18:52.120 "name": "BaseBdev3", 00:18:52.120 "uuid": "b886f297-80e8-4028-80a8-f84f94077f79", 00:18:52.120 "is_configured": true, 00:18:52.120 "data_offset": 2048, 00:18:52.120 "data_size": 63488 00:18:52.120 }, 00:18:52.120 { 00:18:52.120 "name": "BaseBdev4", 00:18:52.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.120 "is_configured": false, 00:18:52.120 "data_offset": 0, 00:18:52.120 "data_size": 0 00:18:52.120 } 00:18:52.120 ] 00:18:52.120 }' 00:18:52.121 08:52:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.121 08:52:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.688 08:52:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:52.688 08:52:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.688 08:52:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.688 [2024-11-20 08:52:23.438692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:52.688 [2024-11-20 08:52:23.439062] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:52.688 [2024-11-20 08:52:23.439084] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:52.688 BaseBdev4 
00:18:52.688 [2024-11-20 08:52:23.439438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.689 [2024-11-20 08:52:23.446415] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:52.689 [2024-11-20 08:52:23.446448] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:52.689 [2024-11-20 08:52:23.446766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:52.689 08:52:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.689 [ 00:18:52.689 { 00:18:52.689 "name": "BaseBdev4", 00:18:52.689 "aliases": [ 00:18:52.689 "77b8d6f7-7f9b-4d28-9090-5ef240d986f0" 00:18:52.689 ], 00:18:52.689 "product_name": "Malloc disk", 00:18:52.689 "block_size": 512, 00:18:52.689 "num_blocks": 65536, 00:18:52.689 "uuid": "77b8d6f7-7f9b-4d28-9090-5ef240d986f0", 00:18:52.689 "assigned_rate_limits": { 00:18:52.689 "rw_ios_per_sec": 0, 00:18:52.689 "rw_mbytes_per_sec": 0, 00:18:52.689 "r_mbytes_per_sec": 0, 00:18:52.689 "w_mbytes_per_sec": 0 00:18:52.689 }, 00:18:52.689 "claimed": true, 00:18:52.689 "claim_type": "exclusive_write", 00:18:52.689 "zoned": false, 00:18:52.689 "supported_io_types": { 00:18:52.689 "read": true, 00:18:52.689 "write": true, 00:18:52.689 "unmap": true, 00:18:52.689 "flush": true, 00:18:52.689 "reset": true, 00:18:52.689 "nvme_admin": false, 00:18:52.689 "nvme_io": false, 00:18:52.689 "nvme_io_md": false, 00:18:52.689 "write_zeroes": true, 00:18:52.689 "zcopy": true, 00:18:52.689 "get_zone_info": false, 00:18:52.689 "zone_management": false, 00:18:52.689 "zone_append": false, 00:18:52.689 "compare": false, 00:18:52.689 "compare_and_write": false, 00:18:52.689 "abort": true, 00:18:52.689 "seek_hole": false, 00:18:52.689 "seek_data": false, 00:18:52.689 "copy": true, 00:18:52.689 "nvme_iov_md": false 00:18:52.689 }, 00:18:52.689 "memory_domains": [ 00:18:52.689 { 00:18:52.689 "dma_device_id": "system", 00:18:52.689 "dma_device_type": 1 00:18:52.689 }, 00:18:52.689 { 00:18:52.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.689 "dma_device_type": 2 00:18:52.689 } 00:18:52.689 ], 00:18:52.689 "driver_specific": {} 00:18:52.689 } 00:18:52.689 ] 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.689 08:52:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.689 "name": "Existed_Raid", 00:18:52.689 "uuid": "f98aa3e9-ab73-4ea5-ac08-e036ba97c9db", 00:18:52.689 "strip_size_kb": 64, 00:18:52.689 "state": "online", 00:18:52.689 "raid_level": "raid5f", 00:18:52.689 "superblock": true, 00:18:52.689 "num_base_bdevs": 4, 00:18:52.689 "num_base_bdevs_discovered": 4, 00:18:52.689 "num_base_bdevs_operational": 4, 00:18:52.689 "base_bdevs_list": [ 00:18:52.689 { 00:18:52.689 "name": "BaseBdev1", 00:18:52.689 "uuid": "3ace28d5-412c-47da-9433-5cafd8702e95", 00:18:52.689 "is_configured": true, 00:18:52.689 "data_offset": 2048, 00:18:52.689 "data_size": 63488 00:18:52.689 }, 00:18:52.689 { 00:18:52.689 "name": "BaseBdev2", 00:18:52.689 "uuid": "bc34e7c2-5df5-4008-abff-427b09cc0379", 00:18:52.689 "is_configured": true, 00:18:52.689 "data_offset": 2048, 00:18:52.689 "data_size": 63488 00:18:52.689 }, 00:18:52.689 { 00:18:52.689 "name": "BaseBdev3", 00:18:52.689 "uuid": "b886f297-80e8-4028-80a8-f84f94077f79", 00:18:52.689 "is_configured": true, 00:18:52.689 "data_offset": 2048, 00:18:52.689 "data_size": 63488 00:18:52.689 }, 00:18:52.689 { 00:18:52.689 "name": "BaseBdev4", 00:18:52.689 "uuid": "77b8d6f7-7f9b-4d28-9090-5ef240d986f0", 00:18:52.689 "is_configured": true, 00:18:52.689 "data_offset": 2048, 00:18:52.689 "data_size": 63488 00:18:52.689 } 00:18:52.689 ] 00:18:52.689 }' 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.689 08:52:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.256 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:53.256 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:18:53.256 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:53.256 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:53.256 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:53.256 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:53.256 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:53.256 08:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.256 08:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.256 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:53.256 [2024-11-20 08:52:24.018596] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:53.256 08:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.256 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:53.256 "name": "Existed_Raid", 00:18:53.256 "aliases": [ 00:18:53.256 "f98aa3e9-ab73-4ea5-ac08-e036ba97c9db" 00:18:53.256 ], 00:18:53.256 "product_name": "Raid Volume", 00:18:53.256 "block_size": 512, 00:18:53.256 "num_blocks": 190464, 00:18:53.256 "uuid": "f98aa3e9-ab73-4ea5-ac08-e036ba97c9db", 00:18:53.256 "assigned_rate_limits": { 00:18:53.256 "rw_ios_per_sec": 0, 00:18:53.256 "rw_mbytes_per_sec": 0, 00:18:53.256 "r_mbytes_per_sec": 0, 00:18:53.256 "w_mbytes_per_sec": 0 00:18:53.256 }, 00:18:53.256 "claimed": false, 00:18:53.256 "zoned": false, 00:18:53.256 "supported_io_types": { 00:18:53.256 "read": true, 00:18:53.256 "write": true, 00:18:53.256 "unmap": false, 00:18:53.256 "flush": false, 
00:18:53.256 "reset": true, 00:18:53.256 "nvme_admin": false, 00:18:53.256 "nvme_io": false, 00:18:53.256 "nvme_io_md": false, 00:18:53.256 "write_zeroes": true, 00:18:53.256 "zcopy": false, 00:18:53.256 "get_zone_info": false, 00:18:53.256 "zone_management": false, 00:18:53.256 "zone_append": false, 00:18:53.256 "compare": false, 00:18:53.256 "compare_and_write": false, 00:18:53.256 "abort": false, 00:18:53.256 "seek_hole": false, 00:18:53.256 "seek_data": false, 00:18:53.256 "copy": false, 00:18:53.256 "nvme_iov_md": false 00:18:53.256 }, 00:18:53.256 "driver_specific": { 00:18:53.256 "raid": { 00:18:53.256 "uuid": "f98aa3e9-ab73-4ea5-ac08-e036ba97c9db", 00:18:53.256 "strip_size_kb": 64, 00:18:53.256 "state": "online", 00:18:53.256 "raid_level": "raid5f", 00:18:53.256 "superblock": true, 00:18:53.256 "num_base_bdevs": 4, 00:18:53.256 "num_base_bdevs_discovered": 4, 00:18:53.256 "num_base_bdevs_operational": 4, 00:18:53.256 "base_bdevs_list": [ 00:18:53.256 { 00:18:53.256 "name": "BaseBdev1", 00:18:53.256 "uuid": "3ace28d5-412c-47da-9433-5cafd8702e95", 00:18:53.256 "is_configured": true, 00:18:53.256 "data_offset": 2048, 00:18:53.256 "data_size": 63488 00:18:53.256 }, 00:18:53.256 { 00:18:53.256 "name": "BaseBdev2", 00:18:53.256 "uuid": "bc34e7c2-5df5-4008-abff-427b09cc0379", 00:18:53.256 "is_configured": true, 00:18:53.256 "data_offset": 2048, 00:18:53.256 "data_size": 63488 00:18:53.256 }, 00:18:53.256 { 00:18:53.256 "name": "BaseBdev3", 00:18:53.256 "uuid": "b886f297-80e8-4028-80a8-f84f94077f79", 00:18:53.256 "is_configured": true, 00:18:53.256 "data_offset": 2048, 00:18:53.256 "data_size": 63488 00:18:53.256 }, 00:18:53.256 { 00:18:53.256 "name": "BaseBdev4", 00:18:53.256 "uuid": "77b8d6f7-7f9b-4d28-9090-5ef240d986f0", 00:18:53.256 "is_configured": true, 00:18:53.256 "data_offset": 2048, 00:18:53.256 "data_size": 63488 00:18:53.256 } 00:18:53.256 ] 00:18:53.256 } 00:18:53.256 } 00:18:53.256 }' 00:18:53.256 08:52:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:53.256 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:53.256 BaseBdev2 00:18:53.256 BaseBdev3 00:18:53.256 BaseBdev4' 00:18:53.256 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:53.256 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:53.256 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:53.256 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:53.256 08:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.256 08:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.256 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:53.515 08:52:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.515 08:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.515 [2024-11-20 08:52:24.370439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:53.774 08:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.774 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:53.774 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:53.774 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:53.774 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:18:53.774 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:53.774 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:53.774 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:53.774 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:53.774 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:53.774 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:53.774 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:53.774 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.774 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.774 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.774 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.774 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.774 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.774 08:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.774 08:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.774 08:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.774 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.774 "name": "Existed_Raid", 00:18:53.774 "uuid": "f98aa3e9-ab73-4ea5-ac08-e036ba97c9db", 00:18:53.774 "strip_size_kb": 64, 00:18:53.774 "state": "online", 00:18:53.774 "raid_level": "raid5f", 00:18:53.774 "superblock": true, 00:18:53.774 "num_base_bdevs": 4, 00:18:53.774 "num_base_bdevs_discovered": 3, 00:18:53.774 "num_base_bdevs_operational": 3, 00:18:53.774 "base_bdevs_list": [ 00:18:53.774 { 00:18:53.774 "name": null, 00:18:53.774 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:53.774 "is_configured": false, 00:18:53.774 "data_offset": 0, 00:18:53.774 "data_size": 63488 00:18:53.774 }, 00:18:53.774 { 00:18:53.774 "name": "BaseBdev2", 00:18:53.774 "uuid": "bc34e7c2-5df5-4008-abff-427b09cc0379", 00:18:53.774 "is_configured": true, 00:18:53.774 "data_offset": 2048, 00:18:53.774 "data_size": 63488 00:18:53.774 }, 00:18:53.774 { 00:18:53.774 "name": "BaseBdev3", 00:18:53.774 "uuid": "b886f297-80e8-4028-80a8-f84f94077f79", 00:18:53.774 "is_configured": true, 00:18:53.774 "data_offset": 2048, 00:18:53.774 "data_size": 63488 00:18:53.774 }, 00:18:53.774 { 00:18:53.774 "name": "BaseBdev4", 00:18:53.774 "uuid": "77b8d6f7-7f9b-4d28-9090-5ef240d986f0", 00:18:53.774 "is_configured": true, 00:18:53.774 "data_offset": 2048, 00:18:53.774 "data_size": 63488 00:18:53.774 } 00:18:53.774 ] 00:18:53.774 }' 00:18:53.774 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.774 08:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.342 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:54.342 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:54.342 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.342 08:52:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:54.342 08:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.342 08:52:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.342 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.342 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:18:54.342 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:54.342 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:54.342 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.342 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.342 [2024-11-20 08:52:25.056905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:54.342 [2024-11-20 08:52:25.057303] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:54.342 [2024-11-20 08:52:25.140307] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:54.342 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.342 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:54.342 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:54.342 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:54.342 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.342 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.342 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.342 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.342 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:54.342 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:54.342 
08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:54.342 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.342 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.342 [2024-11-20 08:52:25.192388] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.602 [2024-11-20 08:52:25.330758] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:54.602 [2024-11-20 08:52:25.330814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:54.602 BaseBdev2 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.602 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.862 [ 00:18:54.862 { 00:18:54.862 "name": "BaseBdev2", 00:18:54.862 "aliases": [ 00:18:54.862 "0c93ccf7-17cc-4e82-b853-deafd9415802" 00:18:54.862 ], 00:18:54.862 "product_name": "Malloc disk", 00:18:54.862 "block_size": 512, 00:18:54.862 "num_blocks": 65536, 00:18:54.862 "uuid": 
"0c93ccf7-17cc-4e82-b853-deafd9415802", 00:18:54.862 "assigned_rate_limits": { 00:18:54.862 "rw_ios_per_sec": 0, 00:18:54.862 "rw_mbytes_per_sec": 0, 00:18:54.862 "r_mbytes_per_sec": 0, 00:18:54.862 "w_mbytes_per_sec": 0 00:18:54.862 }, 00:18:54.862 "claimed": false, 00:18:54.862 "zoned": false, 00:18:54.862 "supported_io_types": { 00:18:54.862 "read": true, 00:18:54.862 "write": true, 00:18:54.862 "unmap": true, 00:18:54.862 "flush": true, 00:18:54.862 "reset": true, 00:18:54.862 "nvme_admin": false, 00:18:54.862 "nvme_io": false, 00:18:54.862 "nvme_io_md": false, 00:18:54.862 "write_zeroes": true, 00:18:54.862 "zcopy": true, 00:18:54.862 "get_zone_info": false, 00:18:54.862 "zone_management": false, 00:18:54.862 "zone_append": false, 00:18:54.862 "compare": false, 00:18:54.862 "compare_and_write": false, 00:18:54.862 "abort": true, 00:18:54.862 "seek_hole": false, 00:18:54.862 "seek_data": false, 00:18:54.862 "copy": true, 00:18:54.862 "nvme_iov_md": false 00:18:54.862 }, 00:18:54.862 "memory_domains": [ 00:18:54.862 { 00:18:54.862 "dma_device_id": "system", 00:18:54.862 "dma_device_type": 1 00:18:54.862 }, 00:18:54.862 { 00:18:54.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.862 "dma_device_type": 2 00:18:54.862 } 00:18:54.862 ], 00:18:54.862 "driver_specific": {} 00:18:54.862 } 00:18:54.862 ] 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.862 BaseBdev3 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.862 [ 00:18:54.862 { 00:18:54.862 "name": "BaseBdev3", 00:18:54.862 "aliases": [ 00:18:54.862 "752a34f5-1a11-4738-a04b-ea6e0c177887" 00:18:54.862 ], 00:18:54.862 
"product_name": "Malloc disk", 00:18:54.862 "block_size": 512, 00:18:54.862 "num_blocks": 65536, 00:18:54.862 "uuid": "752a34f5-1a11-4738-a04b-ea6e0c177887", 00:18:54.862 "assigned_rate_limits": { 00:18:54.862 "rw_ios_per_sec": 0, 00:18:54.862 "rw_mbytes_per_sec": 0, 00:18:54.862 "r_mbytes_per_sec": 0, 00:18:54.862 "w_mbytes_per_sec": 0 00:18:54.862 }, 00:18:54.862 "claimed": false, 00:18:54.862 "zoned": false, 00:18:54.862 "supported_io_types": { 00:18:54.862 "read": true, 00:18:54.862 "write": true, 00:18:54.862 "unmap": true, 00:18:54.862 "flush": true, 00:18:54.862 "reset": true, 00:18:54.862 "nvme_admin": false, 00:18:54.862 "nvme_io": false, 00:18:54.862 "nvme_io_md": false, 00:18:54.862 "write_zeroes": true, 00:18:54.862 "zcopy": true, 00:18:54.862 "get_zone_info": false, 00:18:54.862 "zone_management": false, 00:18:54.862 "zone_append": false, 00:18:54.862 "compare": false, 00:18:54.862 "compare_and_write": false, 00:18:54.862 "abort": true, 00:18:54.862 "seek_hole": false, 00:18:54.862 "seek_data": false, 00:18:54.862 "copy": true, 00:18:54.862 "nvme_iov_md": false 00:18:54.862 }, 00:18:54.862 "memory_domains": [ 00:18:54.862 { 00:18:54.862 "dma_device_id": "system", 00:18:54.862 "dma_device_type": 1 00:18:54.862 }, 00:18:54.862 { 00:18:54.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.862 "dma_device_type": 2 00:18:54.862 } 00:18:54.862 ], 00:18:54.862 "driver_specific": {} 00:18:54.862 } 00:18:54.862 ] 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.862 BaseBdev4 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:54.862 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.863 [ 00:18:54.863 { 00:18:54.863 "name": "BaseBdev4", 00:18:54.863 
"aliases": [ 00:18:54.863 "a6f7ff31-7747-4375-a6ba-a3c55f5545ae" 00:18:54.863 ], 00:18:54.863 "product_name": "Malloc disk", 00:18:54.863 "block_size": 512, 00:18:54.863 "num_blocks": 65536, 00:18:54.863 "uuid": "a6f7ff31-7747-4375-a6ba-a3c55f5545ae", 00:18:54.863 "assigned_rate_limits": { 00:18:54.863 "rw_ios_per_sec": 0, 00:18:54.863 "rw_mbytes_per_sec": 0, 00:18:54.863 "r_mbytes_per_sec": 0, 00:18:54.863 "w_mbytes_per_sec": 0 00:18:54.863 }, 00:18:54.863 "claimed": false, 00:18:54.863 "zoned": false, 00:18:54.863 "supported_io_types": { 00:18:54.863 "read": true, 00:18:54.863 "write": true, 00:18:54.863 "unmap": true, 00:18:54.863 "flush": true, 00:18:54.863 "reset": true, 00:18:54.863 "nvme_admin": false, 00:18:54.863 "nvme_io": false, 00:18:54.863 "nvme_io_md": false, 00:18:54.863 "write_zeroes": true, 00:18:54.863 "zcopy": true, 00:18:54.863 "get_zone_info": false, 00:18:54.863 "zone_management": false, 00:18:54.863 "zone_append": false, 00:18:54.863 "compare": false, 00:18:54.863 "compare_and_write": false, 00:18:54.863 "abort": true, 00:18:54.863 "seek_hole": false, 00:18:54.863 "seek_data": false, 00:18:54.863 "copy": true, 00:18:54.863 "nvme_iov_md": false 00:18:54.863 }, 00:18:54.863 "memory_domains": [ 00:18:54.863 { 00:18:54.863 "dma_device_id": "system", 00:18:54.863 "dma_device_type": 1 00:18:54.863 }, 00:18:54.863 { 00:18:54.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.863 "dma_device_type": 2 00:18:54.863 } 00:18:54.863 ], 00:18:54.863 "driver_specific": {} 00:18:54.863 } 00:18:54.863 ] 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:54.863 
08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.863 [2024-11-20 08:52:25.690198] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:54.863 [2024-11-20 08:52:25.690256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:54.863 [2024-11-20 08:52:25.690290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:54.863 [2024-11-20 08:52:25.692673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:54.863 [2024-11-20 08:52:25.692745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.863 "name": "Existed_Raid", 00:18:54.863 "uuid": "432a425d-2444-4527-aeb0-6ad96379b682", 00:18:54.863 "strip_size_kb": 64, 00:18:54.863 "state": "configuring", 00:18:54.863 "raid_level": "raid5f", 00:18:54.863 "superblock": true, 00:18:54.863 "num_base_bdevs": 4, 00:18:54.863 "num_base_bdevs_discovered": 3, 00:18:54.863 "num_base_bdevs_operational": 4, 00:18:54.863 "base_bdevs_list": [ 00:18:54.863 { 00:18:54.863 "name": "BaseBdev1", 00:18:54.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.863 "is_configured": false, 00:18:54.863 "data_offset": 0, 00:18:54.863 "data_size": 0 00:18:54.863 }, 00:18:54.863 { 00:18:54.863 "name": "BaseBdev2", 00:18:54.863 "uuid": "0c93ccf7-17cc-4e82-b853-deafd9415802", 00:18:54.863 "is_configured": true, 00:18:54.863 "data_offset": 2048, 00:18:54.863 "data_size": 63488 00:18:54.863 }, 00:18:54.863 { 00:18:54.863 "name": "BaseBdev3", 
00:18:54.863 "uuid": "752a34f5-1a11-4738-a04b-ea6e0c177887", 00:18:54.863 "is_configured": true, 00:18:54.863 "data_offset": 2048, 00:18:54.863 "data_size": 63488 00:18:54.863 }, 00:18:54.863 { 00:18:54.863 "name": "BaseBdev4", 00:18:54.863 "uuid": "a6f7ff31-7747-4375-a6ba-a3c55f5545ae", 00:18:54.863 "is_configured": true, 00:18:54.863 "data_offset": 2048, 00:18:54.863 "data_size": 63488 00:18:54.863 } 00:18:54.863 ] 00:18:54.863 }' 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.863 08:52:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.433 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:55.433 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.433 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.433 [2024-11-20 08:52:26.226325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:55.433 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.433 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:55.433 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:55.433 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:55.433 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:55.433 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:55.433 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:55.433 
08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.433 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.433 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.433 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.433 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.433 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.433 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.433 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.433 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.433 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.433 "name": "Existed_Raid", 00:18:55.433 "uuid": "432a425d-2444-4527-aeb0-6ad96379b682", 00:18:55.433 "strip_size_kb": 64, 00:18:55.433 "state": "configuring", 00:18:55.433 "raid_level": "raid5f", 00:18:55.433 "superblock": true, 00:18:55.433 "num_base_bdevs": 4, 00:18:55.433 "num_base_bdevs_discovered": 2, 00:18:55.433 "num_base_bdevs_operational": 4, 00:18:55.433 "base_bdevs_list": [ 00:18:55.433 { 00:18:55.433 "name": "BaseBdev1", 00:18:55.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.433 "is_configured": false, 00:18:55.433 "data_offset": 0, 00:18:55.433 "data_size": 0 00:18:55.433 }, 00:18:55.433 { 00:18:55.433 "name": null, 00:18:55.433 "uuid": "0c93ccf7-17cc-4e82-b853-deafd9415802", 00:18:55.433 "is_configured": false, 00:18:55.433 "data_offset": 0, 00:18:55.433 "data_size": 63488 00:18:55.433 }, 00:18:55.433 { 
00:18:55.433 "name": "BaseBdev3", 00:18:55.433 "uuid": "752a34f5-1a11-4738-a04b-ea6e0c177887", 00:18:55.433 "is_configured": true, 00:18:55.433 "data_offset": 2048, 00:18:55.433 "data_size": 63488 00:18:55.433 }, 00:18:55.433 { 00:18:55.433 "name": "BaseBdev4", 00:18:55.433 "uuid": "a6f7ff31-7747-4375-a6ba-a3c55f5545ae", 00:18:55.433 "is_configured": true, 00:18:55.433 "data_offset": 2048, 00:18:55.433 "data_size": 63488 00:18:55.433 } 00:18:55.433 ] 00:18:55.433 }' 00:18:55.433 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.433 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.003 [2024-11-20 08:52:26.841656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:56.003 BaseBdev1 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.003 [ 00:18:56.003 { 00:18:56.003 "name": "BaseBdev1", 00:18:56.003 "aliases": [ 00:18:56.003 "34450d67-8a31-4407-a397-4a33aad1e608" 00:18:56.003 ], 00:18:56.003 "product_name": "Malloc disk", 00:18:56.003 "block_size": 512, 00:18:56.003 "num_blocks": 65536, 00:18:56.003 "uuid": "34450d67-8a31-4407-a397-4a33aad1e608", 00:18:56.003 "assigned_rate_limits": { 00:18:56.003 "rw_ios_per_sec": 0, 00:18:56.003 "rw_mbytes_per_sec": 0, 00:18:56.003 
"r_mbytes_per_sec": 0, 00:18:56.003 "w_mbytes_per_sec": 0 00:18:56.003 }, 00:18:56.003 "claimed": true, 00:18:56.003 "claim_type": "exclusive_write", 00:18:56.003 "zoned": false, 00:18:56.003 "supported_io_types": { 00:18:56.003 "read": true, 00:18:56.003 "write": true, 00:18:56.003 "unmap": true, 00:18:56.003 "flush": true, 00:18:56.003 "reset": true, 00:18:56.003 "nvme_admin": false, 00:18:56.003 "nvme_io": false, 00:18:56.003 "nvme_io_md": false, 00:18:56.003 "write_zeroes": true, 00:18:56.003 "zcopy": true, 00:18:56.003 "get_zone_info": false, 00:18:56.003 "zone_management": false, 00:18:56.003 "zone_append": false, 00:18:56.003 "compare": false, 00:18:56.003 "compare_and_write": false, 00:18:56.003 "abort": true, 00:18:56.003 "seek_hole": false, 00:18:56.003 "seek_data": false, 00:18:56.003 "copy": true, 00:18:56.003 "nvme_iov_md": false 00:18:56.003 }, 00:18:56.003 "memory_domains": [ 00:18:56.003 { 00:18:56.003 "dma_device_id": "system", 00:18:56.003 "dma_device_type": 1 00:18:56.003 }, 00:18:56.003 { 00:18:56.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.003 "dma_device_type": 2 00:18:56.003 } 00:18:56.003 ], 00:18:56.003 "driver_specific": {} 00:18:56.003 } 00:18:56.003 ] 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:56.003 08:52:26 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.003 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.262 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.262 "name": "Existed_Raid", 00:18:56.262 "uuid": "432a425d-2444-4527-aeb0-6ad96379b682", 00:18:56.262 "strip_size_kb": 64, 00:18:56.262 "state": "configuring", 00:18:56.262 "raid_level": "raid5f", 00:18:56.262 "superblock": true, 00:18:56.262 "num_base_bdevs": 4, 00:18:56.262 "num_base_bdevs_discovered": 3, 00:18:56.262 "num_base_bdevs_operational": 4, 00:18:56.263 "base_bdevs_list": [ 00:18:56.263 { 00:18:56.263 "name": "BaseBdev1", 00:18:56.263 "uuid": "34450d67-8a31-4407-a397-4a33aad1e608", 00:18:56.263 "is_configured": true, 00:18:56.263 "data_offset": 2048, 00:18:56.263 "data_size": 63488 00:18:56.263 
}, 00:18:56.263 { 00:18:56.263 "name": null, 00:18:56.263 "uuid": "0c93ccf7-17cc-4e82-b853-deafd9415802", 00:18:56.263 "is_configured": false, 00:18:56.263 "data_offset": 0, 00:18:56.263 "data_size": 63488 00:18:56.263 }, 00:18:56.263 { 00:18:56.263 "name": "BaseBdev3", 00:18:56.263 "uuid": "752a34f5-1a11-4738-a04b-ea6e0c177887", 00:18:56.263 "is_configured": true, 00:18:56.263 "data_offset": 2048, 00:18:56.263 "data_size": 63488 00:18:56.263 }, 00:18:56.263 { 00:18:56.263 "name": "BaseBdev4", 00:18:56.263 "uuid": "a6f7ff31-7747-4375-a6ba-a3c55f5545ae", 00:18:56.263 "is_configured": true, 00:18:56.263 "data_offset": 2048, 00:18:56.263 "data_size": 63488 00:18:56.263 } 00:18:56.263 ] 00:18:56.263 }' 00:18:56.263 08:52:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.263 08:52:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.522 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.522 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:56.522 08:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.522 08:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.522 08:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.522 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:56.522 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:56.522 08:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.522 08:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.781 
[2024-11-20 08:52:27.437914] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:56.781 08:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.781 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:56.782 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:56.782 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:56.782 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:56.782 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:56.782 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:56.782 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.782 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.782 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.782 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.782 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.782 08:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.782 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.782 08:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.782 08:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:56.782 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.782 "name": "Existed_Raid", 00:18:56.782 "uuid": "432a425d-2444-4527-aeb0-6ad96379b682", 00:18:56.782 "strip_size_kb": 64, 00:18:56.782 "state": "configuring", 00:18:56.782 "raid_level": "raid5f", 00:18:56.782 "superblock": true, 00:18:56.782 "num_base_bdevs": 4, 00:18:56.782 "num_base_bdevs_discovered": 2, 00:18:56.782 "num_base_bdevs_operational": 4, 00:18:56.782 "base_bdevs_list": [ 00:18:56.782 { 00:18:56.782 "name": "BaseBdev1", 00:18:56.782 "uuid": "34450d67-8a31-4407-a397-4a33aad1e608", 00:18:56.782 "is_configured": true, 00:18:56.782 "data_offset": 2048, 00:18:56.782 "data_size": 63488 00:18:56.782 }, 00:18:56.782 { 00:18:56.782 "name": null, 00:18:56.782 "uuid": "0c93ccf7-17cc-4e82-b853-deafd9415802", 00:18:56.782 "is_configured": false, 00:18:56.782 "data_offset": 0, 00:18:56.782 "data_size": 63488 00:18:56.782 }, 00:18:56.782 { 00:18:56.782 "name": null, 00:18:56.782 "uuid": "752a34f5-1a11-4738-a04b-ea6e0c177887", 00:18:56.782 "is_configured": false, 00:18:56.782 "data_offset": 0, 00:18:56.782 "data_size": 63488 00:18:56.782 }, 00:18:56.782 { 00:18:56.782 "name": "BaseBdev4", 00:18:56.782 "uuid": "a6f7ff31-7747-4375-a6ba-a3c55f5545ae", 00:18:56.782 "is_configured": true, 00:18:56.782 "data_offset": 2048, 00:18:56.782 "data_size": 63488 00:18:56.782 } 00:18:56.782 ] 00:18:56.782 }' 00:18:56.782 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.782 08:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.040 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:57.040 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.040 08:52:27 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.040 08:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.299 08:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.299 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:57.299 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:57.299 08:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.299 08:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.299 [2024-11-20 08:52:28.002053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:57.299 08:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.299 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:57.299 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:57.299 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:57.299 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:57.299 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:57.299 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:57.299 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.299 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.299 08:52:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.299 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.299 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.299 08:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.299 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.299 08:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.299 08:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.299 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.299 "name": "Existed_Raid", 00:18:57.299 "uuid": "432a425d-2444-4527-aeb0-6ad96379b682", 00:18:57.299 "strip_size_kb": 64, 00:18:57.299 "state": "configuring", 00:18:57.299 "raid_level": "raid5f", 00:18:57.299 "superblock": true, 00:18:57.299 "num_base_bdevs": 4, 00:18:57.299 "num_base_bdevs_discovered": 3, 00:18:57.299 "num_base_bdevs_operational": 4, 00:18:57.299 "base_bdevs_list": [ 00:18:57.299 { 00:18:57.299 "name": "BaseBdev1", 00:18:57.299 "uuid": "34450d67-8a31-4407-a397-4a33aad1e608", 00:18:57.299 "is_configured": true, 00:18:57.299 "data_offset": 2048, 00:18:57.299 "data_size": 63488 00:18:57.299 }, 00:18:57.299 { 00:18:57.299 "name": null, 00:18:57.299 "uuid": "0c93ccf7-17cc-4e82-b853-deafd9415802", 00:18:57.299 "is_configured": false, 00:18:57.299 "data_offset": 0, 00:18:57.299 "data_size": 63488 00:18:57.299 }, 00:18:57.299 { 00:18:57.299 "name": "BaseBdev3", 00:18:57.299 "uuid": "752a34f5-1a11-4738-a04b-ea6e0c177887", 00:18:57.299 "is_configured": true, 00:18:57.299 "data_offset": 2048, 00:18:57.299 "data_size": 63488 00:18:57.299 }, 00:18:57.299 { 
00:18:57.299 "name": "BaseBdev4", 00:18:57.299 "uuid": "a6f7ff31-7747-4375-a6ba-a3c55f5545ae", 00:18:57.299 "is_configured": true, 00:18:57.299 "data_offset": 2048, 00:18:57.299 "data_size": 63488 00:18:57.299 } 00:18:57.299 ] 00:18:57.299 }' 00:18:57.299 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.299 08:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.873 [2024-11-20 08:52:28.586280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.873 "name": "Existed_Raid", 00:18:57.873 "uuid": "432a425d-2444-4527-aeb0-6ad96379b682", 00:18:57.873 "strip_size_kb": 64, 00:18:57.873 "state": "configuring", 00:18:57.873 "raid_level": "raid5f", 00:18:57.873 "superblock": true, 00:18:57.873 "num_base_bdevs": 4, 00:18:57.873 "num_base_bdevs_discovered": 2, 00:18:57.873 
"num_base_bdevs_operational": 4, 00:18:57.873 "base_bdevs_list": [ 00:18:57.873 { 00:18:57.873 "name": null, 00:18:57.873 "uuid": "34450d67-8a31-4407-a397-4a33aad1e608", 00:18:57.873 "is_configured": false, 00:18:57.873 "data_offset": 0, 00:18:57.873 "data_size": 63488 00:18:57.873 }, 00:18:57.873 { 00:18:57.873 "name": null, 00:18:57.873 "uuid": "0c93ccf7-17cc-4e82-b853-deafd9415802", 00:18:57.873 "is_configured": false, 00:18:57.873 "data_offset": 0, 00:18:57.873 "data_size": 63488 00:18:57.873 }, 00:18:57.873 { 00:18:57.873 "name": "BaseBdev3", 00:18:57.873 "uuid": "752a34f5-1a11-4738-a04b-ea6e0c177887", 00:18:57.873 "is_configured": true, 00:18:57.873 "data_offset": 2048, 00:18:57.873 "data_size": 63488 00:18:57.873 }, 00:18:57.873 { 00:18:57.873 "name": "BaseBdev4", 00:18:57.873 "uuid": "a6f7ff31-7747-4375-a6ba-a3c55f5545ae", 00:18:57.873 "is_configured": true, 00:18:57.873 "data_offset": 2048, 00:18:57.873 "data_size": 63488 00:18:57.873 } 00:18:57.873 ] 00:18:57.873 }' 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.873 08:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.475 [2024-11-20 08:52:29.219728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.475 "name": "Existed_Raid", 00:18:58.475 "uuid": "432a425d-2444-4527-aeb0-6ad96379b682", 00:18:58.475 "strip_size_kb": 64, 00:18:58.475 "state": "configuring", 00:18:58.475 "raid_level": "raid5f", 00:18:58.475 "superblock": true, 00:18:58.475 "num_base_bdevs": 4, 00:18:58.475 "num_base_bdevs_discovered": 3, 00:18:58.475 "num_base_bdevs_operational": 4, 00:18:58.475 "base_bdevs_list": [ 00:18:58.475 { 00:18:58.475 "name": null, 00:18:58.475 "uuid": "34450d67-8a31-4407-a397-4a33aad1e608", 00:18:58.475 "is_configured": false, 00:18:58.475 "data_offset": 0, 00:18:58.475 "data_size": 63488 00:18:58.475 }, 00:18:58.475 { 00:18:58.475 "name": "BaseBdev2", 00:18:58.475 "uuid": "0c93ccf7-17cc-4e82-b853-deafd9415802", 00:18:58.475 "is_configured": true, 00:18:58.475 "data_offset": 2048, 00:18:58.475 "data_size": 63488 00:18:58.475 }, 00:18:58.475 { 00:18:58.475 "name": "BaseBdev3", 00:18:58.475 "uuid": "752a34f5-1a11-4738-a04b-ea6e0c177887", 00:18:58.475 "is_configured": true, 00:18:58.475 "data_offset": 2048, 00:18:58.475 "data_size": 63488 00:18:58.475 }, 00:18:58.475 { 00:18:58.475 "name": "BaseBdev4", 00:18:58.475 "uuid": "a6f7ff31-7747-4375-a6ba-a3c55f5545ae", 00:18:58.475 "is_configured": true, 00:18:58.475 "data_offset": 2048, 00:18:58.475 "data_size": 63488 00:18:58.475 } 00:18:58.475 ] 00:18:58.475 }' 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.475 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 34450d67-8a31-4407-a397-4a33aad1e608 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.043 [2024-11-20 08:52:29.850683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:59.043 [2024-11-20 08:52:29.851021] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:59.043 [2024-11-20 
08:52:29.851041] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:59.043 [2024-11-20 08:52:29.851393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:59.043 NewBaseBdev 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.043 [2024-11-20 08:52:29.857831] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:59.043 [2024-11-20 08:52:29.857867] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:59.043 [2024-11-20 08:52:29.858215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.043 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.043 [ 00:18:59.043 { 00:18:59.043 "name": "NewBaseBdev", 00:18:59.043 "aliases": [ 00:18:59.043 "34450d67-8a31-4407-a397-4a33aad1e608" 00:18:59.043 ], 00:18:59.043 "product_name": "Malloc disk", 00:18:59.044 "block_size": 512, 00:18:59.044 "num_blocks": 65536, 00:18:59.044 "uuid": "34450d67-8a31-4407-a397-4a33aad1e608", 00:18:59.044 "assigned_rate_limits": { 00:18:59.044 "rw_ios_per_sec": 0, 00:18:59.044 "rw_mbytes_per_sec": 0, 00:18:59.044 "r_mbytes_per_sec": 0, 00:18:59.044 "w_mbytes_per_sec": 0 00:18:59.044 }, 00:18:59.044 "claimed": true, 00:18:59.044 "claim_type": "exclusive_write", 00:18:59.044 "zoned": false, 00:18:59.044 "supported_io_types": { 00:18:59.044 "read": true, 00:18:59.044 "write": true, 00:18:59.044 "unmap": true, 00:18:59.044 "flush": true, 00:18:59.044 "reset": true, 00:18:59.044 "nvme_admin": false, 00:18:59.044 "nvme_io": false, 00:18:59.044 "nvme_io_md": false, 00:18:59.044 "write_zeroes": true, 00:18:59.044 "zcopy": true, 00:18:59.044 "get_zone_info": false, 00:18:59.044 "zone_management": false, 00:18:59.044 "zone_append": false, 00:18:59.044 "compare": false, 00:18:59.044 "compare_and_write": false, 00:18:59.044 "abort": true, 00:18:59.044 "seek_hole": false, 00:18:59.044 "seek_data": false, 00:18:59.044 "copy": true, 00:18:59.044 "nvme_iov_md": false 00:18:59.044 }, 00:18:59.044 "memory_domains": [ 00:18:59.044 { 00:18:59.044 "dma_device_id": "system", 00:18:59.044 "dma_device_type": 1 00:18:59.044 }, 00:18:59.044 { 00:18:59.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.044 "dma_device_type": 2 00:18:59.044 } 00:18:59.044 ], 00:18:59.044 "driver_specific": {} 00:18:59.044 } 00:18:59.044 ] 00:18:59.044 08:52:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.044 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:59.044 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:59.044 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:59.044 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.044 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:59.044 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:59.044 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:59.044 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.044 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.044 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.044 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.044 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.044 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.044 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.044 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.044 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:59.044 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.044 "name": "Existed_Raid", 00:18:59.044 "uuid": "432a425d-2444-4527-aeb0-6ad96379b682", 00:18:59.044 "strip_size_kb": 64, 00:18:59.044 "state": "online", 00:18:59.044 "raid_level": "raid5f", 00:18:59.044 "superblock": true, 00:18:59.044 "num_base_bdevs": 4, 00:18:59.044 "num_base_bdevs_discovered": 4, 00:18:59.044 "num_base_bdevs_operational": 4, 00:18:59.044 "base_bdevs_list": [ 00:18:59.044 { 00:18:59.044 "name": "NewBaseBdev", 00:18:59.044 "uuid": "34450d67-8a31-4407-a397-4a33aad1e608", 00:18:59.044 "is_configured": true, 00:18:59.044 "data_offset": 2048, 00:18:59.044 "data_size": 63488 00:18:59.044 }, 00:18:59.044 { 00:18:59.044 "name": "BaseBdev2", 00:18:59.044 "uuid": "0c93ccf7-17cc-4e82-b853-deafd9415802", 00:18:59.044 "is_configured": true, 00:18:59.044 "data_offset": 2048, 00:18:59.044 "data_size": 63488 00:18:59.044 }, 00:18:59.044 { 00:18:59.044 "name": "BaseBdev3", 00:18:59.044 "uuid": "752a34f5-1a11-4738-a04b-ea6e0c177887", 00:18:59.044 "is_configured": true, 00:18:59.044 "data_offset": 2048, 00:18:59.044 "data_size": 63488 00:18:59.044 }, 00:18:59.044 { 00:18:59.044 "name": "BaseBdev4", 00:18:59.044 "uuid": "a6f7ff31-7747-4375-a6ba-a3c55f5545ae", 00:18:59.044 "is_configured": true, 00:18:59.044 "data_offset": 2048, 00:18:59.044 "data_size": 63488 00:18:59.044 } 00:18:59.044 ] 00:18:59.044 }' 00:18:59.044 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.044 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.612 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:59.612 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:59.612 08:52:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:59.612 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:59.612 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:59.612 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:59.612 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:59.612 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:59.612 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.612 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.612 [2024-11-20 08:52:30.434123] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.612 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.612 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:59.612 "name": "Existed_Raid", 00:18:59.612 "aliases": [ 00:18:59.612 "432a425d-2444-4527-aeb0-6ad96379b682" 00:18:59.612 ], 00:18:59.612 "product_name": "Raid Volume", 00:18:59.612 "block_size": 512, 00:18:59.612 "num_blocks": 190464, 00:18:59.612 "uuid": "432a425d-2444-4527-aeb0-6ad96379b682", 00:18:59.612 "assigned_rate_limits": { 00:18:59.612 "rw_ios_per_sec": 0, 00:18:59.612 "rw_mbytes_per_sec": 0, 00:18:59.612 "r_mbytes_per_sec": 0, 00:18:59.612 "w_mbytes_per_sec": 0 00:18:59.612 }, 00:18:59.612 "claimed": false, 00:18:59.612 "zoned": false, 00:18:59.612 "supported_io_types": { 00:18:59.612 "read": true, 00:18:59.612 "write": true, 00:18:59.612 "unmap": false, 00:18:59.612 "flush": false, 00:18:59.612 "reset": true, 00:18:59.612 "nvme_admin": false, 00:18:59.612 "nvme_io": false, 
00:18:59.612 "nvme_io_md": false, 00:18:59.612 "write_zeroes": true, 00:18:59.612 "zcopy": false, 00:18:59.612 "get_zone_info": false, 00:18:59.612 "zone_management": false, 00:18:59.612 "zone_append": false, 00:18:59.612 "compare": false, 00:18:59.612 "compare_and_write": false, 00:18:59.612 "abort": false, 00:18:59.612 "seek_hole": false, 00:18:59.612 "seek_data": false, 00:18:59.612 "copy": false, 00:18:59.612 "nvme_iov_md": false 00:18:59.612 }, 00:18:59.612 "driver_specific": { 00:18:59.612 "raid": { 00:18:59.612 "uuid": "432a425d-2444-4527-aeb0-6ad96379b682", 00:18:59.612 "strip_size_kb": 64, 00:18:59.612 "state": "online", 00:18:59.612 "raid_level": "raid5f", 00:18:59.612 "superblock": true, 00:18:59.612 "num_base_bdevs": 4, 00:18:59.612 "num_base_bdevs_discovered": 4, 00:18:59.612 "num_base_bdevs_operational": 4, 00:18:59.612 "base_bdevs_list": [ 00:18:59.612 { 00:18:59.612 "name": "NewBaseBdev", 00:18:59.612 "uuid": "34450d67-8a31-4407-a397-4a33aad1e608", 00:18:59.612 "is_configured": true, 00:18:59.612 "data_offset": 2048, 00:18:59.612 "data_size": 63488 00:18:59.612 }, 00:18:59.612 { 00:18:59.612 "name": "BaseBdev2", 00:18:59.612 "uuid": "0c93ccf7-17cc-4e82-b853-deafd9415802", 00:18:59.612 "is_configured": true, 00:18:59.612 "data_offset": 2048, 00:18:59.612 "data_size": 63488 00:18:59.612 }, 00:18:59.612 { 00:18:59.612 "name": "BaseBdev3", 00:18:59.612 "uuid": "752a34f5-1a11-4738-a04b-ea6e0c177887", 00:18:59.612 "is_configured": true, 00:18:59.612 "data_offset": 2048, 00:18:59.612 "data_size": 63488 00:18:59.612 }, 00:18:59.612 { 00:18:59.612 "name": "BaseBdev4", 00:18:59.612 "uuid": "a6f7ff31-7747-4375-a6ba-a3c55f5545ae", 00:18:59.612 "is_configured": true, 00:18:59.612 "data_offset": 2048, 00:18:59.612 "data_size": 63488 00:18:59.612 } 00:18:59.612 ] 00:18:59.612 } 00:18:59.612 } 00:18:59.612 }' 00:18:59.612 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:59.871 BaseBdev2 00:18:59.871 BaseBdev3 00:18:59.871 BaseBdev4' 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.871 08:52:30 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:59.871 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:59.872 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.872 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:18:59.872 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.872 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.130 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:00.130 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:00.130 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:00.130 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.130 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.130 [2024-11-20 08:52:30.805900] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:00.130 [2024-11-20 08:52:30.806096] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:00.130 [2024-11-20 08:52:30.806222] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:00.130 [2024-11-20 08:52:30.806643] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:00.130 [2024-11-20 08:52:30.806662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:00.130 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.130 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83843 00:19:00.130 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83843 ']' 00:19:00.130 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83843 00:19:00.130 08:52:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:00.130 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.130 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83843 00:19:00.130 killing process with pid 83843 00:19:00.130 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:00.130 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:00.130 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83843' 00:19:00.130 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83843 00:19:00.130 [2024-11-20 08:52:30.843935] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:00.130 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83843 00:19:00.389 [2024-11-20 08:52:31.198020] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:01.325 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:01.325 00:19:01.325 real 0m12.733s 00:19:01.325 user 0m21.201s 00:19:01.325 sys 0m1.744s 00:19:01.325 ************************************ 00:19:01.325 END TEST raid5f_state_function_test_sb 00:19:01.325 ************************************ 00:19:01.325 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:01.325 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.585 08:52:32 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:19:01.585 08:52:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:01.585 
08:52:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:01.585 08:52:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:01.585 ************************************ 00:19:01.585 START TEST raid5f_superblock_test 00:19:01.585 ************************************ 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84518 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84518 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84518 ']' 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.585 08:52:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.585 [2024-11-20 08:52:32.379896] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:19:01.585 [2024-11-20 08:52:32.380293] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84518 ] 00:19:01.844 [2024-11-20 08:52:32.560894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.844 [2024-11-20 08:52:32.688992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.103 [2024-11-20 08:52:32.887950] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:02.103 [2024-11-20 08:52:32.888021] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.712 malloc1 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.712 [2024-11-20 08:52:33.399019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:02.712 [2024-11-20 08:52:33.399254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.712 [2024-11-20 08:52:33.399300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:02.712 [2024-11-20 08:52:33.399316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.712 [2024-11-20 08:52:33.402117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.712 [2024-11-20 08:52:33.402173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:02.712 pt1 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.712 malloc2 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.712 [2024-11-20 08:52:33.455520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:02.712 [2024-11-20 08:52:33.455623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.712 [2024-11-20 08:52:33.455656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:02.712 [2024-11-20 08:52:33.455671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.712 [2024-11-20 08:52:33.458423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.712 [2024-11-20 08:52:33.458471] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:02.712 pt2 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.712 malloc3 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.712 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.712 [2024-11-20 08:52:33.524114] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:02.713 [2024-11-20 08:52:33.524200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.713 [2024-11-20 08:52:33.524235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:02.713 [2024-11-20 08:52:33.524251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.713 [2024-11-20 08:52:33.527076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.713 [2024-11-20 08:52:33.527125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:02.713 pt3 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.713 08:52:33 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.713 malloc4 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.713 [2024-11-20 08:52:33.580294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:02.713 [2024-11-20 08:52:33.580516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.713 [2024-11-20 08:52:33.580562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:02.713 [2024-11-20 08:52:33.580578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.713 [2024-11-20 08:52:33.583518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.713 [2024-11-20 08:52:33.583724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:02.713 pt4 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:02.713 [2024-11-20 08:52:33.592535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:02.713 [2024-11-20 08:52:33.595281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:02.713 [2024-11-20 08:52:33.595522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:02.713 [2024-11-20 08:52:33.595742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:02.713 [2024-11-20 08:52:33.596192] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:02.713 [2024-11-20 08:52:33.596360] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:02.713 [2024-11-20 08:52:33.596765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:02.713 [2024-11-20 08:52:33.603735] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:02.713 [2024-11-20 08:52:33.603768] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:02.713 [2024-11-20 08:52:33.604093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:02.713 
08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.713 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.972 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.972 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.972 "name": "raid_bdev1", 00:19:02.972 "uuid": "c020da15-7c34-40b8-998c-1d1b4b7fc119", 00:19:02.972 "strip_size_kb": 64, 00:19:02.972 "state": "online", 00:19:02.972 "raid_level": "raid5f", 00:19:02.972 "superblock": true, 00:19:02.972 "num_base_bdevs": 4, 00:19:02.972 "num_base_bdevs_discovered": 4, 00:19:02.972 "num_base_bdevs_operational": 4, 00:19:02.972 "base_bdevs_list": [ 00:19:02.972 { 00:19:02.972 "name": "pt1", 00:19:02.972 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:02.972 "is_configured": true, 00:19:02.972 "data_offset": 2048, 00:19:02.972 "data_size": 63488 00:19:02.972 }, 00:19:02.972 { 00:19:02.972 "name": "pt2", 00:19:02.972 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:02.972 "is_configured": true, 00:19:02.972 "data_offset": 2048, 00:19:02.972 
"data_size": 63488 00:19:02.972 }, 00:19:02.972 { 00:19:02.972 "name": "pt3", 00:19:02.972 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:02.972 "is_configured": true, 00:19:02.972 "data_offset": 2048, 00:19:02.972 "data_size": 63488 00:19:02.972 }, 00:19:02.972 { 00:19:02.972 "name": "pt4", 00:19:02.972 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:02.972 "is_configured": true, 00:19:02.972 "data_offset": 2048, 00:19:02.972 "data_size": 63488 00:19:02.972 } 00:19:02.972 ] 00:19:02.972 }' 00:19:02.972 08:52:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.972 08:52:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.232 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:03.232 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:03.232 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:03.232 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:03.232 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:03.232 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:03.232 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:03.232 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:03.232 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.232 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.232 [2024-11-20 08:52:34.068274] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.232 08:52:34 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.232 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:03.232 "name": "raid_bdev1", 00:19:03.232 "aliases": [ 00:19:03.232 "c020da15-7c34-40b8-998c-1d1b4b7fc119" 00:19:03.232 ], 00:19:03.232 "product_name": "Raid Volume", 00:19:03.232 "block_size": 512, 00:19:03.232 "num_blocks": 190464, 00:19:03.232 "uuid": "c020da15-7c34-40b8-998c-1d1b4b7fc119", 00:19:03.232 "assigned_rate_limits": { 00:19:03.232 "rw_ios_per_sec": 0, 00:19:03.232 "rw_mbytes_per_sec": 0, 00:19:03.232 "r_mbytes_per_sec": 0, 00:19:03.232 "w_mbytes_per_sec": 0 00:19:03.232 }, 00:19:03.232 "claimed": false, 00:19:03.232 "zoned": false, 00:19:03.232 "supported_io_types": { 00:19:03.232 "read": true, 00:19:03.232 "write": true, 00:19:03.232 "unmap": false, 00:19:03.232 "flush": false, 00:19:03.232 "reset": true, 00:19:03.232 "nvme_admin": false, 00:19:03.232 "nvme_io": false, 00:19:03.232 "nvme_io_md": false, 00:19:03.232 "write_zeroes": true, 00:19:03.232 "zcopy": false, 00:19:03.232 "get_zone_info": false, 00:19:03.232 "zone_management": false, 00:19:03.232 "zone_append": false, 00:19:03.232 "compare": false, 00:19:03.232 "compare_and_write": false, 00:19:03.232 "abort": false, 00:19:03.232 "seek_hole": false, 00:19:03.232 "seek_data": false, 00:19:03.232 "copy": false, 00:19:03.232 "nvme_iov_md": false 00:19:03.232 }, 00:19:03.232 "driver_specific": { 00:19:03.232 "raid": { 00:19:03.232 "uuid": "c020da15-7c34-40b8-998c-1d1b4b7fc119", 00:19:03.232 "strip_size_kb": 64, 00:19:03.232 "state": "online", 00:19:03.232 "raid_level": "raid5f", 00:19:03.232 "superblock": true, 00:19:03.232 "num_base_bdevs": 4, 00:19:03.232 "num_base_bdevs_discovered": 4, 00:19:03.232 "num_base_bdevs_operational": 4, 00:19:03.232 "base_bdevs_list": [ 00:19:03.232 { 00:19:03.232 "name": "pt1", 00:19:03.232 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:03.232 "is_configured": true, 00:19:03.232 "data_offset": 2048, 
00:19:03.232 "data_size": 63488 00:19:03.232 }, 00:19:03.232 { 00:19:03.232 "name": "pt2", 00:19:03.232 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:03.232 "is_configured": true, 00:19:03.232 "data_offset": 2048, 00:19:03.232 "data_size": 63488 00:19:03.232 }, 00:19:03.232 { 00:19:03.232 "name": "pt3", 00:19:03.232 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:03.232 "is_configured": true, 00:19:03.232 "data_offset": 2048, 00:19:03.232 "data_size": 63488 00:19:03.232 }, 00:19:03.232 { 00:19:03.232 "name": "pt4", 00:19:03.232 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:03.232 "is_configured": true, 00:19:03.232 "data_offset": 2048, 00:19:03.232 "data_size": 63488 00:19:03.232 } 00:19:03.232 ] 00:19:03.232 } 00:19:03.232 } 00:19:03.232 }' 00:19:03.232 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:03.492 pt2 00:19:03.492 pt3 00:19:03.492 pt4' 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.492 08:52:34 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.492 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.751 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:03.751 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:03.751 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:03.751 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:03.751 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.751 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.751 [2024-11-20 08:52:34.420301] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.751 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.751 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c020da15-7c34-40b8-998c-1d1b4b7fc119 00:19:03.751 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
c020da15-7c34-40b8-998c-1d1b4b7fc119 ']' 00:19:03.751 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:03.751 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.751 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.751 [2024-11-20 08:52:34.468043] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:03.751 [2024-11-20 08:52:34.468076] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:03.751 [2024-11-20 08:52:34.468213] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.751 [2024-11-20 08:52:34.468338] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:03.752 [2024-11-20 08:52:34.468365] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:03.752 
08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.752 08:52:34 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.752 [2024-11-20 08:52:34.624138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:03.752 [2024-11-20 08:52:34.626609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:03.752 [2024-11-20 08:52:34.626680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:03.752 [2024-11-20 08:52:34.626735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:03.752 [2024-11-20 08:52:34.626814] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:03.752 [2024-11-20 08:52:34.626897] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:03.752 [2024-11-20 08:52:34.626932] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:03.752 [2024-11-20 08:52:34.626964] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:19:03.752 [2024-11-20 08:52:34.626988] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:03.752 [2024-11-20 08:52:34.627005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:03.752 request: 00:19:03.752 { 00:19:03.752 "name": "raid_bdev1", 00:19:03.752 "raid_level": "raid5f", 00:19:03.752 "base_bdevs": [ 00:19:03.752 "malloc1", 00:19:03.752 "malloc2", 00:19:03.752 "malloc3", 00:19:03.752 "malloc4" 00:19:03.752 ], 00:19:03.752 "strip_size_kb": 64, 00:19:03.752 "superblock": false, 00:19:03.752 "method": "bdev_raid_create", 00:19:03.752 "req_id": 1 00:19:03.752 } 00:19:03.752 Got JSON-RPC error response 
00:19:03.752 response: 00:19:03.752 { 00:19:03.752 "code": -17, 00:19:03.752 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:03.752 } 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.752 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.012 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:04.012 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:04.012 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:04.012 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.012 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.012 [2024-11-20 08:52:34.708111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:04.012 [2024-11-20 08:52:34.708363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:19:04.012 [2024-11-20 08:52:34.708500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:04.012 [2024-11-20 08:52:34.708622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.012 [2024-11-20 08:52:34.711635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.012 [2024-11-20 08:52:34.711804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:04.012 [2024-11-20 08:52:34.712022] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:04.012 [2024-11-20 08:52:34.712224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:04.012 pt1 00:19:04.012 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.012 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:19:04.012 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.012 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:04.012 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:04.012 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:04.012 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:04.012 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.012 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.012 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.012 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:19:04.012 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.012 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.012 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.012 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.012 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.012 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.012 "name": "raid_bdev1", 00:19:04.012 "uuid": "c020da15-7c34-40b8-998c-1d1b4b7fc119", 00:19:04.012 "strip_size_kb": 64, 00:19:04.012 "state": "configuring", 00:19:04.012 "raid_level": "raid5f", 00:19:04.012 "superblock": true, 00:19:04.012 "num_base_bdevs": 4, 00:19:04.012 "num_base_bdevs_discovered": 1, 00:19:04.012 "num_base_bdevs_operational": 4, 00:19:04.012 "base_bdevs_list": [ 00:19:04.012 { 00:19:04.012 "name": "pt1", 00:19:04.012 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:04.012 "is_configured": true, 00:19:04.012 "data_offset": 2048, 00:19:04.012 "data_size": 63488 00:19:04.012 }, 00:19:04.012 { 00:19:04.012 "name": null, 00:19:04.012 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:04.012 "is_configured": false, 00:19:04.012 "data_offset": 2048, 00:19:04.012 "data_size": 63488 00:19:04.012 }, 00:19:04.012 { 00:19:04.012 "name": null, 00:19:04.012 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:04.012 "is_configured": false, 00:19:04.012 "data_offset": 2048, 00:19:04.012 "data_size": 63488 00:19:04.012 }, 00:19:04.012 { 00:19:04.012 "name": null, 00:19:04.012 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:04.012 "is_configured": false, 00:19:04.012 "data_offset": 2048, 00:19:04.012 "data_size": 63488 00:19:04.012 } 00:19:04.012 ] 00:19:04.012 }' 
00:19:04.012 08:52:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.012 08:52:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.580 [2024-11-20 08:52:35.200285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:04.580 [2024-11-20 08:52:35.200380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.580 [2024-11-20 08:52:35.200409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:04.580 [2024-11-20 08:52:35.200427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.580 [2024-11-20 08:52:35.200975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.580 [2024-11-20 08:52:35.201014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:04.580 [2024-11-20 08:52:35.201113] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:04.580 [2024-11-20 08:52:35.201169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:04.580 pt2 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.580 [2024-11-20 08:52:35.208273] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.580 "name": "raid_bdev1", 00:19:04.580 "uuid": "c020da15-7c34-40b8-998c-1d1b4b7fc119", 00:19:04.580 "strip_size_kb": 64, 00:19:04.580 "state": "configuring", 00:19:04.580 "raid_level": "raid5f", 00:19:04.580 "superblock": true, 00:19:04.580 "num_base_bdevs": 4, 00:19:04.580 "num_base_bdevs_discovered": 1, 00:19:04.580 "num_base_bdevs_operational": 4, 00:19:04.580 "base_bdevs_list": [ 00:19:04.580 { 00:19:04.580 "name": "pt1", 00:19:04.580 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:04.580 "is_configured": true, 00:19:04.580 "data_offset": 2048, 00:19:04.580 "data_size": 63488 00:19:04.580 }, 00:19:04.580 { 00:19:04.580 "name": null, 00:19:04.580 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:04.580 "is_configured": false, 00:19:04.580 "data_offset": 0, 00:19:04.580 "data_size": 63488 00:19:04.580 }, 00:19:04.580 { 00:19:04.580 "name": null, 00:19:04.580 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:04.580 "is_configured": false, 00:19:04.580 "data_offset": 2048, 00:19:04.580 "data_size": 63488 00:19:04.580 }, 00:19:04.580 { 00:19:04.580 "name": null, 00:19:04.580 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:04.580 "is_configured": false, 00:19:04.580 "data_offset": 2048, 00:19:04.580 "data_size": 63488 00:19:04.580 } 00:19:04.580 ] 00:19:04.580 }' 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.580 08:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.839 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:04.839 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:04.839 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:19:04.839 08:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.839 08:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.839 [2024-11-20 08:52:35.744414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:04.839 [2024-11-20 08:52:35.744492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.839 [2024-11-20 08:52:35.744523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:04.839 [2024-11-20 08:52:35.744538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.839 [2024-11-20 08:52:35.745110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.839 [2024-11-20 08:52:35.745135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:04.839 [2024-11-20 08:52:35.745270] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:04.839 [2024-11-20 08:52:35.745303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:04.839 pt2 00:19:04.839 08:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.839 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:04.839 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:04.839 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:04.839 08:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.839 08:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.100 [2024-11-20 08:52:35.756385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:19:05.100 [2024-11-20 08:52:35.756445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.100 [2024-11-20 08:52:35.756472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:05.100 [2024-11-20 08:52:35.756486] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.100 [2024-11-20 08:52:35.756931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.100 [2024-11-20 08:52:35.756962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:05.100 [2024-11-20 08:52:35.757039] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:05.100 [2024-11-20 08:52:35.757066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:05.100 pt3 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.100 [2024-11-20 08:52:35.764372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:05.100 [2024-11-20 08:52:35.764433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.100 [2024-11-20 08:52:35.764461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:05.100 [2024-11-20 08:52:35.764475] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.100 [2024-11-20 08:52:35.764931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.100 [2024-11-20 08:52:35.764962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:05.100 [2024-11-20 08:52:35.765058] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:05.100 [2024-11-20 08:52:35.765086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:05.100 [2024-11-20 08:52:35.765276] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:05.100 [2024-11-20 08:52:35.765293] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:05.100 [2024-11-20 08:52:35.765602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:05.100 [2024-11-20 08:52:35.772073] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:05.100 [2024-11-20 08:52:35.772289] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:05.100 [2024-11-20 08:52:35.772543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.100 pt4 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.100 "name": "raid_bdev1", 00:19:05.100 "uuid": "c020da15-7c34-40b8-998c-1d1b4b7fc119", 00:19:05.100 "strip_size_kb": 64, 00:19:05.100 "state": "online", 00:19:05.100 "raid_level": "raid5f", 00:19:05.100 "superblock": true, 00:19:05.100 "num_base_bdevs": 4, 00:19:05.100 "num_base_bdevs_discovered": 4, 00:19:05.100 "num_base_bdevs_operational": 4, 00:19:05.100 "base_bdevs_list": [ 00:19:05.100 { 00:19:05.100 "name": "pt1", 00:19:05.100 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:05.100 "is_configured": true, 00:19:05.100 
"data_offset": 2048, 00:19:05.100 "data_size": 63488 00:19:05.100 }, 00:19:05.100 { 00:19:05.100 "name": "pt2", 00:19:05.100 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.100 "is_configured": true, 00:19:05.100 "data_offset": 2048, 00:19:05.100 "data_size": 63488 00:19:05.100 }, 00:19:05.100 { 00:19:05.100 "name": "pt3", 00:19:05.100 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:05.100 "is_configured": true, 00:19:05.100 "data_offset": 2048, 00:19:05.100 "data_size": 63488 00:19:05.100 }, 00:19:05.100 { 00:19:05.100 "name": "pt4", 00:19:05.100 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:05.100 "is_configured": true, 00:19:05.100 "data_offset": 2048, 00:19:05.100 "data_size": 63488 00:19:05.100 } 00:19:05.100 ] 00:19:05.100 }' 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.100 08:52:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.669 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:05.669 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:05.669 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:05.669 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:05.669 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:05.669 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:05.669 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:05.669 08:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.669 08:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.669 08:52:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:05.669 [2024-11-20 08:52:36.292287] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:05.669 08:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.669 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:05.669 "name": "raid_bdev1", 00:19:05.669 "aliases": [ 00:19:05.669 "c020da15-7c34-40b8-998c-1d1b4b7fc119" 00:19:05.669 ], 00:19:05.669 "product_name": "Raid Volume", 00:19:05.669 "block_size": 512, 00:19:05.669 "num_blocks": 190464, 00:19:05.669 "uuid": "c020da15-7c34-40b8-998c-1d1b4b7fc119", 00:19:05.669 "assigned_rate_limits": { 00:19:05.669 "rw_ios_per_sec": 0, 00:19:05.669 "rw_mbytes_per_sec": 0, 00:19:05.669 "r_mbytes_per_sec": 0, 00:19:05.669 "w_mbytes_per_sec": 0 00:19:05.669 }, 00:19:05.669 "claimed": false, 00:19:05.669 "zoned": false, 00:19:05.669 "supported_io_types": { 00:19:05.669 "read": true, 00:19:05.669 "write": true, 00:19:05.669 "unmap": false, 00:19:05.669 "flush": false, 00:19:05.669 "reset": true, 00:19:05.669 "nvme_admin": false, 00:19:05.669 "nvme_io": false, 00:19:05.669 "nvme_io_md": false, 00:19:05.669 "write_zeroes": true, 00:19:05.669 "zcopy": false, 00:19:05.669 "get_zone_info": false, 00:19:05.669 "zone_management": false, 00:19:05.669 "zone_append": false, 00:19:05.669 "compare": false, 00:19:05.669 "compare_and_write": false, 00:19:05.669 "abort": false, 00:19:05.669 "seek_hole": false, 00:19:05.669 "seek_data": false, 00:19:05.669 "copy": false, 00:19:05.669 "nvme_iov_md": false 00:19:05.669 }, 00:19:05.669 "driver_specific": { 00:19:05.669 "raid": { 00:19:05.669 "uuid": "c020da15-7c34-40b8-998c-1d1b4b7fc119", 00:19:05.669 "strip_size_kb": 64, 00:19:05.669 "state": "online", 00:19:05.670 "raid_level": "raid5f", 00:19:05.670 "superblock": true, 00:19:05.670 "num_base_bdevs": 4, 00:19:05.670 "num_base_bdevs_discovered": 4, 
00:19:05.670 "num_base_bdevs_operational": 4, 00:19:05.670 "base_bdevs_list": [ 00:19:05.670 { 00:19:05.670 "name": "pt1", 00:19:05.670 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:05.670 "is_configured": true, 00:19:05.670 "data_offset": 2048, 00:19:05.670 "data_size": 63488 00:19:05.670 }, 00:19:05.670 { 00:19:05.670 "name": "pt2", 00:19:05.670 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.670 "is_configured": true, 00:19:05.670 "data_offset": 2048, 00:19:05.670 "data_size": 63488 00:19:05.670 }, 00:19:05.670 { 00:19:05.670 "name": "pt3", 00:19:05.670 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:05.670 "is_configured": true, 00:19:05.670 "data_offset": 2048, 00:19:05.670 "data_size": 63488 00:19:05.670 }, 00:19:05.670 { 00:19:05.670 "name": "pt4", 00:19:05.670 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:05.670 "is_configured": true, 00:19:05.670 "data_offset": 2048, 00:19:05.670 "data_size": 63488 00:19:05.670 } 00:19:05.670 ] 00:19:05.670 } 00:19:05.670 } 00:19:05.670 }' 00:19:05.670 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:05.670 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:05.670 pt2 00:19:05.670 pt3 00:19:05.670 pt4' 00:19:05.670 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:05.670 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:05.670 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:05.670 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:05.670 08:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.670 08:52:36 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.670 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:05.670 08:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.670 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:05.670 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:05.670 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:05.670 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:05.670 08:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.670 08:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.670 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:05.670 08:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.670 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:05.670 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:05.670 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:05.670 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:05.670 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:05.670 08:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.670 
08:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.670 08:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.929 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:05.929 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:05.929 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:05.929 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:05.929 08:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.929 08:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.929 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:05.929 08:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.929 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:05.929 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.930 [2024-11-20 08:52:36.644279] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c020da15-7c34-40b8-998c-1d1b4b7fc119 '!=' c020da15-7c34-40b8-998c-1d1b4b7fc119 ']' 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.930 [2024-11-20 08:52:36.692121] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.930 "name": "raid_bdev1", 00:19:05.930 "uuid": "c020da15-7c34-40b8-998c-1d1b4b7fc119", 00:19:05.930 "strip_size_kb": 64, 00:19:05.930 "state": "online", 00:19:05.930 "raid_level": "raid5f", 00:19:05.930 "superblock": true, 00:19:05.930 "num_base_bdevs": 4, 00:19:05.930 "num_base_bdevs_discovered": 3, 00:19:05.930 "num_base_bdevs_operational": 3, 00:19:05.930 "base_bdevs_list": [ 00:19:05.930 { 00:19:05.930 "name": null, 00:19:05.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.930 "is_configured": false, 00:19:05.930 "data_offset": 0, 00:19:05.930 "data_size": 63488 00:19:05.930 }, 00:19:05.930 { 00:19:05.930 "name": "pt2", 00:19:05.930 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.930 "is_configured": true, 00:19:05.930 "data_offset": 2048, 00:19:05.930 "data_size": 63488 00:19:05.930 }, 00:19:05.930 { 00:19:05.930 "name": "pt3", 00:19:05.930 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:05.930 "is_configured": true, 00:19:05.930 "data_offset": 2048, 00:19:05.930 "data_size": 63488 00:19:05.930 }, 00:19:05.930 { 00:19:05.930 "name": "pt4", 00:19:05.930 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:05.930 "is_configured": true, 00:19:05.930 
"data_offset": 2048, 00:19:05.930 "data_size": 63488 00:19:05.930 } 00:19:05.930 ] 00:19:05.930 }' 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.930 08:52:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.498 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:06.498 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.498 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.498 [2024-11-20 08:52:37.200228] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:06.498 [2024-11-20 08:52:37.200279] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:06.498 [2024-11-20 08:52:37.200385] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:06.498 [2024-11-20 08:52:37.200485] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:06.498 [2024-11-20 08:52:37.200502] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:06.498 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.498 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.498 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.498 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.498 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:06.498 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.498 08:52:37 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:06.498 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:06.498 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:06.498 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.499 [2024-11-20 08:52:37.292230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:06.499 [2024-11-20 08:52:37.292325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.499 [2024-11-20 08:52:37.292356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:06.499 [2024-11-20 08:52:37.292370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.499 [2024-11-20 08:52:37.295248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.499 [2024-11-20 08:52:37.295428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:06.499 [2024-11-20 08:52:37.295549] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:06.499 [2024-11-20 08:52:37.295612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:06.499 pt2 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.499 "name": "raid_bdev1", 00:19:06.499 "uuid": "c020da15-7c34-40b8-998c-1d1b4b7fc119", 00:19:06.499 "strip_size_kb": 64, 00:19:06.499 "state": "configuring", 00:19:06.499 "raid_level": "raid5f", 00:19:06.499 "superblock": true, 00:19:06.499 
"num_base_bdevs": 4, 00:19:06.499 "num_base_bdevs_discovered": 1, 00:19:06.499 "num_base_bdevs_operational": 3, 00:19:06.499 "base_bdevs_list": [ 00:19:06.499 { 00:19:06.499 "name": null, 00:19:06.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.499 "is_configured": false, 00:19:06.499 "data_offset": 2048, 00:19:06.499 "data_size": 63488 00:19:06.499 }, 00:19:06.499 { 00:19:06.499 "name": "pt2", 00:19:06.499 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:06.499 "is_configured": true, 00:19:06.499 "data_offset": 2048, 00:19:06.499 "data_size": 63488 00:19:06.499 }, 00:19:06.499 { 00:19:06.499 "name": null, 00:19:06.499 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:06.499 "is_configured": false, 00:19:06.499 "data_offset": 2048, 00:19:06.499 "data_size": 63488 00:19:06.499 }, 00:19:06.499 { 00:19:06.499 "name": null, 00:19:06.499 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:06.499 "is_configured": false, 00:19:06.499 "data_offset": 2048, 00:19:06.499 "data_size": 63488 00:19:06.499 } 00:19:06.499 ] 00:19:06.499 }' 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.499 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.067 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:07.067 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:07.067 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:07.067 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.067 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.067 [2024-11-20 08:52:37.792411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:07.067 [2024-11-20 
08:52:37.792489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.067 [2024-11-20 08:52:37.792521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:07.067 [2024-11-20 08:52:37.792537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.067 [2024-11-20 08:52:37.793092] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.067 [2024-11-20 08:52:37.793118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:07.067 [2024-11-20 08:52:37.793249] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:07.067 [2024-11-20 08:52:37.793289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:07.067 pt3 00:19:07.067 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.067 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:07.067 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.067 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:07.067 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:07.067 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:07.067 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:07.067 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.067 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.067 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:07.067 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.067 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.068 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.068 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.068 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.068 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.068 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.068 "name": "raid_bdev1", 00:19:07.068 "uuid": "c020da15-7c34-40b8-998c-1d1b4b7fc119", 00:19:07.068 "strip_size_kb": 64, 00:19:07.068 "state": "configuring", 00:19:07.068 "raid_level": "raid5f", 00:19:07.068 "superblock": true, 00:19:07.068 "num_base_bdevs": 4, 00:19:07.068 "num_base_bdevs_discovered": 2, 00:19:07.068 "num_base_bdevs_operational": 3, 00:19:07.068 "base_bdevs_list": [ 00:19:07.068 { 00:19:07.068 "name": null, 00:19:07.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.068 "is_configured": false, 00:19:07.068 "data_offset": 2048, 00:19:07.068 "data_size": 63488 00:19:07.068 }, 00:19:07.068 { 00:19:07.068 "name": "pt2", 00:19:07.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:07.068 "is_configured": true, 00:19:07.068 "data_offset": 2048, 00:19:07.068 "data_size": 63488 00:19:07.068 }, 00:19:07.068 { 00:19:07.068 "name": "pt3", 00:19:07.068 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:07.068 "is_configured": true, 00:19:07.068 "data_offset": 2048, 00:19:07.068 "data_size": 63488 00:19:07.068 }, 00:19:07.068 { 00:19:07.068 "name": null, 00:19:07.068 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:07.068 "is_configured": false, 00:19:07.068 "data_offset": 2048, 
00:19:07.068 "data_size": 63488 00:19:07.068 } 00:19:07.068 ] 00:19:07.068 }' 00:19:07.068 08:52:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.068 08:52:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.638 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:07.638 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:07.638 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:19:07.638 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:07.638 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.638 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.638 [2024-11-20 08:52:38.264535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:07.638 [2024-11-20 08:52:38.264613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.638 [2024-11-20 08:52:38.264646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:07.638 [2024-11-20 08:52:38.264662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.638 [2024-11-20 08:52:38.265241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.638 [2024-11-20 08:52:38.265423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:07.638 [2024-11-20 08:52:38.265554] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:07.638 [2024-11-20 08:52:38.265590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:07.638 [2024-11-20 08:52:38.265761] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:07.638 [2024-11-20 08:52:38.265778] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:07.638 [2024-11-20 08:52:38.266084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:07.638 [2024-11-20 08:52:38.272470] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:07.638 [2024-11-20 08:52:38.272640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:07.638 [2024-11-20 08:52:38.273007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.638 pt4 00:19:07.638 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.638 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:07.638 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.638 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.638 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:07.638 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:07.638 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:07.638 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.638 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.638 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.638 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.638 
08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.638 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.638 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.638 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.638 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.638 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.638 "name": "raid_bdev1", 00:19:07.638 "uuid": "c020da15-7c34-40b8-998c-1d1b4b7fc119", 00:19:07.638 "strip_size_kb": 64, 00:19:07.638 "state": "online", 00:19:07.638 "raid_level": "raid5f", 00:19:07.638 "superblock": true, 00:19:07.638 "num_base_bdevs": 4, 00:19:07.638 "num_base_bdevs_discovered": 3, 00:19:07.638 "num_base_bdevs_operational": 3, 00:19:07.638 "base_bdevs_list": [ 00:19:07.638 { 00:19:07.639 "name": null, 00:19:07.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.639 "is_configured": false, 00:19:07.639 "data_offset": 2048, 00:19:07.639 "data_size": 63488 00:19:07.639 }, 00:19:07.639 { 00:19:07.639 "name": "pt2", 00:19:07.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:07.639 "is_configured": true, 00:19:07.639 "data_offset": 2048, 00:19:07.639 "data_size": 63488 00:19:07.639 }, 00:19:07.639 { 00:19:07.639 "name": "pt3", 00:19:07.639 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:07.639 "is_configured": true, 00:19:07.639 "data_offset": 2048, 00:19:07.639 "data_size": 63488 00:19:07.639 }, 00:19:07.639 { 00:19:07.639 "name": "pt4", 00:19:07.639 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:07.639 "is_configured": true, 00:19:07.639 "data_offset": 2048, 00:19:07.639 "data_size": 63488 00:19:07.639 } 00:19:07.639 ] 00:19:07.639 }' 00:19:07.639 08:52:38 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.639 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.898 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:07.898 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.898 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.898 [2024-11-20 08:52:38.776446] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:07.898 [2024-11-20 08:52:38.776615] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:07.898 [2024-11-20 08:52:38.776836] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:07.898 [2024-11-20 08:52:38.777082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:07.898 [2024-11-20 08:52:38.777256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:07.898 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.898 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.898 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.898 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:07.898 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.898 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.157 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:08.157 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:19:08.157 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:19:08.157 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:19:08.157 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:19:08.157 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.157 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.157 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.157 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:08.157 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.158 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.158 [2024-11-20 08:52:38.848430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:08.158 [2024-11-20 08:52:38.848515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.158 [2024-11-20 08:52:38.848549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:19:08.158 [2024-11-20 08:52:38.848571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.158 [2024-11-20 08:52:38.851518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.158 [2024-11-20 08:52:38.851601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:08.158 [2024-11-20 08:52:38.851715] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:08.158 [2024-11-20 08:52:38.851784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:08.158 
[2024-11-20 08:52:38.851944] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:08.158 [2024-11-20 08:52:38.851967] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:08.158 [2024-11-20 08:52:38.851987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:08.158 [2024-11-20 08:52:38.852056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:08.158 [2024-11-20 08:52:38.852233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:08.158 pt1 00:19:08.158 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.158 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:19:08.158 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:08.158 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.158 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:08.158 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:08.158 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:08.158 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:08.158 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.158 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.158 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.158 08:52:38 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.158 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.158 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.158 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.158 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.158 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.158 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.158 "name": "raid_bdev1", 00:19:08.158 "uuid": "c020da15-7c34-40b8-998c-1d1b4b7fc119", 00:19:08.158 "strip_size_kb": 64, 00:19:08.158 "state": "configuring", 00:19:08.158 "raid_level": "raid5f", 00:19:08.158 "superblock": true, 00:19:08.158 "num_base_bdevs": 4, 00:19:08.158 "num_base_bdevs_discovered": 2, 00:19:08.158 "num_base_bdevs_operational": 3, 00:19:08.158 "base_bdevs_list": [ 00:19:08.158 { 00:19:08.158 "name": null, 00:19:08.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.158 "is_configured": false, 00:19:08.158 "data_offset": 2048, 00:19:08.158 "data_size": 63488 00:19:08.158 }, 00:19:08.158 { 00:19:08.158 "name": "pt2", 00:19:08.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:08.158 "is_configured": true, 00:19:08.158 "data_offset": 2048, 00:19:08.158 "data_size": 63488 00:19:08.158 }, 00:19:08.158 { 00:19:08.158 "name": "pt3", 00:19:08.158 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:08.158 "is_configured": true, 00:19:08.158 "data_offset": 2048, 00:19:08.158 "data_size": 63488 00:19:08.158 }, 00:19:08.158 { 00:19:08.158 "name": null, 00:19:08.158 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:08.158 "is_configured": false, 00:19:08.158 "data_offset": 2048, 00:19:08.158 "data_size": 63488 00:19:08.158 } 00:19:08.158 ] 
00:19:08.158 }' 00:19:08.158 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.158 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.725 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:08.725 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:19:08.725 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.725 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.725 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.725 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:19:08.725 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:08.725 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.725 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.725 [2024-11-20 08:52:39.404627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:08.725 [2024-11-20 08:52:39.404705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.725 [2024-11-20 08:52:39.404742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:19:08.725 [2024-11-20 08:52:39.404757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.725 [2024-11-20 08:52:39.405315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.725 [2024-11-20 08:52:39.405348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:19:08.725 [2024-11-20 08:52:39.405455] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:08.725 [2024-11-20 08:52:39.405496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:08.725 [2024-11-20 08:52:39.405677] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:08.725 [2024-11-20 08:52:39.405700] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:08.725 [2024-11-20 08:52:39.406015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:08.725 [2024-11-20 08:52:39.412467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:08.725 [2024-11-20 08:52:39.412499] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:08.725 [2024-11-20 08:52:39.412837] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.725 pt4 00:19:08.725 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.725 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:08.725 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.725 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.725 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:08.725 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:08.725 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:08.725 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.726 08:52:39 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.726 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.726 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.726 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.726 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.726 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.726 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.726 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.726 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.726 "name": "raid_bdev1", 00:19:08.726 "uuid": "c020da15-7c34-40b8-998c-1d1b4b7fc119", 00:19:08.726 "strip_size_kb": 64, 00:19:08.726 "state": "online", 00:19:08.726 "raid_level": "raid5f", 00:19:08.726 "superblock": true, 00:19:08.726 "num_base_bdevs": 4, 00:19:08.726 "num_base_bdevs_discovered": 3, 00:19:08.726 "num_base_bdevs_operational": 3, 00:19:08.726 "base_bdevs_list": [ 00:19:08.726 { 00:19:08.726 "name": null, 00:19:08.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.726 "is_configured": false, 00:19:08.726 "data_offset": 2048, 00:19:08.726 "data_size": 63488 00:19:08.726 }, 00:19:08.726 { 00:19:08.726 "name": "pt2", 00:19:08.726 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:08.726 "is_configured": true, 00:19:08.726 "data_offset": 2048, 00:19:08.726 "data_size": 63488 00:19:08.726 }, 00:19:08.726 { 00:19:08.726 "name": "pt3", 00:19:08.726 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:08.726 "is_configured": true, 00:19:08.726 "data_offset": 2048, 00:19:08.726 "data_size": 63488 
00:19:08.726 }, 00:19:08.726 { 00:19:08.726 "name": "pt4", 00:19:08.726 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:08.726 "is_configured": true, 00:19:08.726 "data_offset": 2048, 00:19:08.726 "data_size": 63488 00:19:08.726 } 00:19:08.726 ] 00:19:08.726 }' 00:19:08.726 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.726 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.293 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:09.293 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.293 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:09.293 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.293 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.293 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:09.293 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:09.293 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:09.293 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.293 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.293 [2024-11-20 08:52:39.956499] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:09.293 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.293 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c020da15-7c34-40b8-998c-1d1b4b7fc119 '!=' c020da15-7c34-40b8-998c-1d1b4b7fc119 ']' 00:19:09.293 08:52:39 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84518 00:19:09.293 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84518 ']' 00:19:09.293 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84518 00:19:09.293 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:19:09.293 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.293 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84518 00:19:09.293 killing process with pid 84518 00:19:09.293 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:09.293 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:09.293 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84518' 00:19:09.293 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84518 00:19:09.293 [2024-11-20 08:52:40.029989] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:09.293 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84518 00:19:09.293 [2024-11-20 08:52:40.030103] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:09.293 [2024-11-20 08:52:40.030215] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:09.293 [2024-11-20 08:52:40.030238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:09.553 [2024-11-20 08:52:40.378363] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:10.491 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:10.491 
00:19:10.491 real 0m9.121s 00:19:10.491 user 0m14.935s 00:19:10.491 sys 0m1.309s 00:19:10.491 ************************************ 00:19:10.491 END TEST raid5f_superblock_test 00:19:10.491 ************************************ 00:19:10.491 08:52:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:10.491 08:52:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.751 08:52:41 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:19:10.751 08:52:41 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:19:10.751 08:52:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:10.751 08:52:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.751 08:52:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:10.751 ************************************ 00:19:10.751 START TEST raid5f_rebuild_test 00:19:10.751 ************************************ 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:10.751 08:52:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85005 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:10.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85005 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85005 ']' 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.751 08:52:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.751 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:10.751 Zero copy mechanism will not be used. 00:19:10.751 [2024-11-20 08:52:41.566449] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:19:10.751 [2024-11-20 08:52:41.566648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85005 ] 00:19:11.011 [2024-11-20 08:52:41.750972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.011 [2024-11-20 08:52:41.880217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.270 [2024-11-20 08:52:42.082346] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:11.270 [2024-11-20 08:52:42.082410] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.839 BaseBdev1_malloc 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.839 [2024-11-20 08:52:42.596180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:19:11.839 [2024-11-20 08:52:42.596311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.839 [2024-11-20 08:52:42.596347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:11.839 [2024-11-20 08:52:42.596366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.839 [2024-11-20 08:52:42.599311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.839 [2024-11-20 08:52:42.599530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:11.839 BaseBdev1 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.839 BaseBdev2_malloc 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.839 [2024-11-20 08:52:42.652122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:11.839 [2024-11-20 08:52:42.652418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.839 [2024-11-20 08:52:42.652463] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:11.839 [2024-11-20 08:52:42.652499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.839 [2024-11-20 08:52:42.655397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.839 [2024-11-20 08:52:42.655603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:11.839 BaseBdev2 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.839 BaseBdev3_malloc 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.839 [2024-11-20 08:52:42.717383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:11.839 [2024-11-20 08:52:42.717471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.839 [2024-11-20 08:52:42.717504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:11.839 [2024-11-20 08:52:42.717523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.839 
[2024-11-20 08:52:42.720385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.839 [2024-11-20 08:52:42.720589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:11.839 BaseBdev3 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.839 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.098 BaseBdev4_malloc 00:19:12.098 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.099 [2024-11-20 08:52:42.769347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:12.099 [2024-11-20 08:52:42.769434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.099 [2024-11-20 08:52:42.769462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:12.099 [2024-11-20 08:52:42.769481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.099 [2024-11-20 08:52:42.772477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.099 [2024-11-20 08:52:42.772535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:19:12.099 BaseBdev4 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.099 spare_malloc 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.099 spare_delay 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.099 [2024-11-20 08:52:42.830095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:12.099 [2024-11-20 08:52:42.830368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.099 [2024-11-20 08:52:42.830423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:12.099 [2024-11-20 08:52:42.830447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.099 [2024-11-20 08:52:42.833493] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.099 [2024-11-20 08:52:42.833725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:12.099 spare 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.099 [2024-11-20 08:52:42.838138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:12.099 [2024-11-20 08:52:42.840588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:12.099 [2024-11-20 08:52:42.840705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:12.099 [2024-11-20 08:52:42.840801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:12.099 [2024-11-20 08:52:42.840941] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:12.099 [2024-11-20 08:52:42.840964] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:12.099 [2024-11-20 08:52:42.841301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:12.099 [2024-11-20 08:52:42.848145] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:12.099 [2024-11-20 08:52:42.848355] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:12.099 [2024-11-20 08:52:42.848689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.099 08:52:42 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.099 "name": "raid_bdev1", 00:19:12.099 "uuid": "ee5e1c24-a59f-417b-ada8-eb269937ee1d", 00:19:12.099 "strip_size_kb": 64, 00:19:12.099 "state": "online", 00:19:12.099 
"raid_level": "raid5f", 00:19:12.099 "superblock": false, 00:19:12.099 "num_base_bdevs": 4, 00:19:12.099 "num_base_bdevs_discovered": 4, 00:19:12.099 "num_base_bdevs_operational": 4, 00:19:12.099 "base_bdevs_list": [ 00:19:12.099 { 00:19:12.099 "name": "BaseBdev1", 00:19:12.099 "uuid": "d8e41902-c015-5036-ad18-9b83a4b7ba0e", 00:19:12.099 "is_configured": true, 00:19:12.099 "data_offset": 0, 00:19:12.099 "data_size": 65536 00:19:12.099 }, 00:19:12.099 { 00:19:12.099 "name": "BaseBdev2", 00:19:12.099 "uuid": "07d4e263-1ade-51dc-a9d0-7e5767e17532", 00:19:12.099 "is_configured": true, 00:19:12.099 "data_offset": 0, 00:19:12.099 "data_size": 65536 00:19:12.099 }, 00:19:12.099 { 00:19:12.099 "name": "BaseBdev3", 00:19:12.099 "uuid": "a21cd94a-b067-51c9-b36c-a6daf243de97", 00:19:12.099 "is_configured": true, 00:19:12.099 "data_offset": 0, 00:19:12.099 "data_size": 65536 00:19:12.099 }, 00:19:12.099 { 00:19:12.099 "name": "BaseBdev4", 00:19:12.099 "uuid": "cc6903b8-0114-56a2-95e8-450769adca23", 00:19:12.099 "is_configured": true, 00:19:12.099 "data_offset": 0, 00:19:12.099 "data_size": 65536 00:19:12.099 } 00:19:12.099 ] 00:19:12.099 }' 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.099 08:52:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.666 [2024-11-20 08:52:43.360534] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:19:12.666 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:12.926 [2024-11-20 08:52:43.736403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:12.926 /dev/nbd0 00:19:12.926 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:12.926 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:12.926 08:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:12.926 08:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:12.926 08:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:12.926 08:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:12.926 08:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:12.926 08:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:12.926 08:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:12.926 08:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:12.926 08:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:12.926 1+0 records in 00:19:12.926 1+0 records out 00:19:12.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322002 s, 12.7 MB/s 00:19:12.926 08:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:12.926 08:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:12.926 08:52:43 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:12.926 08:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:12.926 08:52:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:12.926 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:12.926 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:12.926 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:12.926 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:19:12.926 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:19:12.926 08:52:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:19:13.862 512+0 records in 00:19:13.862 512+0 records out 00:19:13.862 100663296 bytes (101 MB, 96 MiB) copied, 0.619448 s, 163 MB/s 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:13.862 [2024-11-20 08:52:44.689215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.862 [2024-11-20 08:52:44.729570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.862 08:52:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.122 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.122 "name": "raid_bdev1", 00:19:14.122 "uuid": "ee5e1c24-a59f-417b-ada8-eb269937ee1d", 00:19:14.122 "strip_size_kb": 64, 00:19:14.122 "state": "online", 00:19:14.122 "raid_level": "raid5f", 00:19:14.122 "superblock": false, 00:19:14.122 "num_base_bdevs": 4, 00:19:14.122 "num_base_bdevs_discovered": 3, 00:19:14.122 "num_base_bdevs_operational": 3, 00:19:14.122 "base_bdevs_list": [ 00:19:14.122 { 00:19:14.122 "name": null, 00:19:14.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.122 "is_configured": false, 00:19:14.122 "data_offset": 0, 00:19:14.122 "data_size": 65536 00:19:14.122 }, 00:19:14.122 { 00:19:14.122 "name": "BaseBdev2", 00:19:14.122 "uuid": "07d4e263-1ade-51dc-a9d0-7e5767e17532", 00:19:14.122 "is_configured": true, 00:19:14.122 "data_offset": 0, 00:19:14.122 "data_size": 65536 00:19:14.122 }, 00:19:14.122 { 00:19:14.122 "name": "BaseBdev3", 00:19:14.122 "uuid": 
"a21cd94a-b067-51c9-b36c-a6daf243de97", 00:19:14.122 "is_configured": true, 00:19:14.122 "data_offset": 0, 00:19:14.122 "data_size": 65536 00:19:14.122 }, 00:19:14.122 { 00:19:14.122 "name": "BaseBdev4", 00:19:14.122 "uuid": "cc6903b8-0114-56a2-95e8-450769adca23", 00:19:14.122 "is_configured": true, 00:19:14.122 "data_offset": 0, 00:19:14.122 "data_size": 65536 00:19:14.122 } 00:19:14.122 ] 00:19:14.122 }' 00:19:14.122 08:52:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.122 08:52:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.381 08:52:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:14.381 08:52:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.381 08:52:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.381 [2024-11-20 08:52:45.245708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:14.381 [2024-11-20 08:52:45.260542] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:19:14.381 08:52:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.381 08:52:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:14.381 [2024-11-20 08:52:45.269868] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:15.759 08:52:46 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.759 "name": "raid_bdev1", 00:19:15.759 "uuid": "ee5e1c24-a59f-417b-ada8-eb269937ee1d", 00:19:15.759 "strip_size_kb": 64, 00:19:15.759 "state": "online", 00:19:15.759 "raid_level": "raid5f", 00:19:15.759 "superblock": false, 00:19:15.759 "num_base_bdevs": 4, 00:19:15.759 "num_base_bdevs_discovered": 4, 00:19:15.759 "num_base_bdevs_operational": 4, 00:19:15.759 "process": { 00:19:15.759 "type": "rebuild", 00:19:15.759 "target": "spare", 00:19:15.759 "progress": { 00:19:15.759 "blocks": 17280, 00:19:15.759 "percent": 8 00:19:15.759 } 00:19:15.759 }, 00:19:15.759 "base_bdevs_list": [ 00:19:15.759 { 00:19:15.759 "name": "spare", 00:19:15.759 "uuid": "5100c964-f2f7-5d28-b9fa-150c0a115cce", 00:19:15.759 "is_configured": true, 00:19:15.759 "data_offset": 0, 00:19:15.759 "data_size": 65536 00:19:15.759 }, 00:19:15.759 { 00:19:15.759 "name": "BaseBdev2", 00:19:15.759 "uuid": "07d4e263-1ade-51dc-a9d0-7e5767e17532", 00:19:15.759 "is_configured": true, 00:19:15.759 "data_offset": 0, 00:19:15.759 "data_size": 65536 00:19:15.759 }, 00:19:15.759 { 00:19:15.759 "name": "BaseBdev3", 00:19:15.759 "uuid": "a21cd94a-b067-51c9-b36c-a6daf243de97", 00:19:15.759 "is_configured": true, 00:19:15.759 "data_offset": 0, 00:19:15.759 "data_size": 65536 00:19:15.759 }, 
00:19:15.759 { 00:19:15.759 "name": "BaseBdev4", 00:19:15.759 "uuid": "cc6903b8-0114-56a2-95e8-450769adca23", 00:19:15.759 "is_configured": true, 00:19:15.759 "data_offset": 0, 00:19:15.759 "data_size": 65536 00:19:15.759 } 00:19:15.759 ] 00:19:15.759 }' 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.759 [2024-11-20 08:52:46.427525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:15.759 [2024-11-20 08:52:46.482480] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:15.759 [2024-11-20 08:52:46.482586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.759 [2024-11-20 08:52:46.482613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:15.759 [2024-11-20 08:52:46.482628] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.759 "name": "raid_bdev1", 00:19:15.759 "uuid": "ee5e1c24-a59f-417b-ada8-eb269937ee1d", 00:19:15.759 "strip_size_kb": 64, 00:19:15.759 "state": "online", 00:19:15.759 "raid_level": "raid5f", 00:19:15.759 "superblock": false, 00:19:15.759 "num_base_bdevs": 4, 00:19:15.759 "num_base_bdevs_discovered": 3, 00:19:15.759 "num_base_bdevs_operational": 3, 00:19:15.759 "base_bdevs_list": [ 00:19:15.759 { 00:19:15.759 "name": null, 00:19:15.759 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:15.759 "is_configured": false, 00:19:15.759 "data_offset": 0, 00:19:15.759 "data_size": 65536 00:19:15.759 }, 00:19:15.759 { 00:19:15.759 "name": "BaseBdev2", 00:19:15.759 "uuid": "07d4e263-1ade-51dc-a9d0-7e5767e17532", 00:19:15.759 "is_configured": true, 00:19:15.759 "data_offset": 0, 00:19:15.759 "data_size": 65536 00:19:15.759 }, 00:19:15.759 { 00:19:15.759 "name": "BaseBdev3", 00:19:15.759 "uuid": "a21cd94a-b067-51c9-b36c-a6daf243de97", 00:19:15.759 "is_configured": true, 00:19:15.759 "data_offset": 0, 00:19:15.759 "data_size": 65536 00:19:15.759 }, 00:19:15.759 { 00:19:15.759 "name": "BaseBdev4", 00:19:15.759 "uuid": "cc6903b8-0114-56a2-95e8-450769adca23", 00:19:15.759 "is_configured": true, 00:19:15.759 "data_offset": 0, 00:19:15.759 "data_size": 65536 00:19:15.759 } 00:19:15.759 ] 00:19:15.759 }' 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.759 08:52:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.375 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:16.375 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:16.375 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:16.375 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:16.375 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.375 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.375 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.375 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.375 08:52:47 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.375 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.375 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:16.375 "name": "raid_bdev1", 00:19:16.375 "uuid": "ee5e1c24-a59f-417b-ada8-eb269937ee1d", 00:19:16.375 "strip_size_kb": 64, 00:19:16.375 "state": "online", 00:19:16.375 "raid_level": "raid5f", 00:19:16.375 "superblock": false, 00:19:16.375 "num_base_bdevs": 4, 00:19:16.375 "num_base_bdevs_discovered": 3, 00:19:16.375 "num_base_bdevs_operational": 3, 00:19:16.375 "base_bdevs_list": [ 00:19:16.375 { 00:19:16.375 "name": null, 00:19:16.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.375 "is_configured": false, 00:19:16.375 "data_offset": 0, 00:19:16.375 "data_size": 65536 00:19:16.375 }, 00:19:16.375 { 00:19:16.375 "name": "BaseBdev2", 00:19:16.375 "uuid": "07d4e263-1ade-51dc-a9d0-7e5767e17532", 00:19:16.375 "is_configured": true, 00:19:16.375 "data_offset": 0, 00:19:16.375 "data_size": 65536 00:19:16.375 }, 00:19:16.375 { 00:19:16.375 "name": "BaseBdev3", 00:19:16.375 "uuid": "a21cd94a-b067-51c9-b36c-a6daf243de97", 00:19:16.375 "is_configured": true, 00:19:16.375 "data_offset": 0, 00:19:16.375 "data_size": 65536 00:19:16.375 }, 00:19:16.375 { 00:19:16.375 "name": "BaseBdev4", 00:19:16.375 "uuid": "cc6903b8-0114-56a2-95e8-450769adca23", 00:19:16.375 "is_configured": true, 00:19:16.375 "data_offset": 0, 00:19:16.375 "data_size": 65536 00:19:16.375 } 00:19:16.375 ] 00:19:16.375 }' 00:19:16.375 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.375 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:16.375 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.375 08:52:47 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:16.375 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:16.375 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.375 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.375 [2024-11-20 08:52:47.205822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:16.375 [2024-11-20 08:52:47.219590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:19:16.375 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.375 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:16.375 [2024-11-20 08:52:47.228412] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:17.313 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.313 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.313 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:17.313 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:17.313 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.572 08:52:48 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.572 "name": "raid_bdev1", 00:19:17.572 "uuid": "ee5e1c24-a59f-417b-ada8-eb269937ee1d", 00:19:17.572 "strip_size_kb": 64, 00:19:17.572 "state": "online", 00:19:17.572 "raid_level": "raid5f", 00:19:17.572 "superblock": false, 00:19:17.572 "num_base_bdevs": 4, 00:19:17.572 "num_base_bdevs_discovered": 4, 00:19:17.572 "num_base_bdevs_operational": 4, 00:19:17.572 "process": { 00:19:17.572 "type": "rebuild", 00:19:17.572 "target": "spare", 00:19:17.572 "progress": { 00:19:17.572 "blocks": 17280, 00:19:17.572 "percent": 8 00:19:17.572 } 00:19:17.572 }, 00:19:17.572 "base_bdevs_list": [ 00:19:17.572 { 00:19:17.572 "name": "spare", 00:19:17.572 "uuid": "5100c964-f2f7-5d28-b9fa-150c0a115cce", 00:19:17.572 "is_configured": true, 00:19:17.572 "data_offset": 0, 00:19:17.572 "data_size": 65536 00:19:17.572 }, 00:19:17.572 { 00:19:17.572 "name": "BaseBdev2", 00:19:17.572 "uuid": "07d4e263-1ade-51dc-a9d0-7e5767e17532", 00:19:17.572 "is_configured": true, 00:19:17.572 "data_offset": 0, 00:19:17.572 "data_size": 65536 00:19:17.572 }, 00:19:17.572 { 00:19:17.572 "name": "BaseBdev3", 00:19:17.572 "uuid": "a21cd94a-b067-51c9-b36c-a6daf243de97", 00:19:17.572 "is_configured": true, 00:19:17.572 "data_offset": 0, 00:19:17.572 "data_size": 65536 00:19:17.572 }, 00:19:17.572 { 00:19:17.572 "name": "BaseBdev4", 00:19:17.572 "uuid": "cc6903b8-0114-56a2-95e8-450769adca23", 00:19:17.572 "is_configured": true, 00:19:17.572 "data_offset": 0, 00:19:17.572 "data_size": 65536 00:19:17.572 } 00:19:17.572 ] 00:19:17.572 }' 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=669 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.572 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.572 "name": "raid_bdev1", 00:19:17.572 "uuid": "ee5e1c24-a59f-417b-ada8-eb269937ee1d", 
00:19:17.572 "strip_size_kb": 64, 00:19:17.572 "state": "online", 00:19:17.572 "raid_level": "raid5f", 00:19:17.572 "superblock": false, 00:19:17.572 "num_base_bdevs": 4, 00:19:17.572 "num_base_bdevs_discovered": 4, 00:19:17.572 "num_base_bdevs_operational": 4, 00:19:17.572 "process": { 00:19:17.572 "type": "rebuild", 00:19:17.572 "target": "spare", 00:19:17.572 "progress": { 00:19:17.573 "blocks": 21120, 00:19:17.573 "percent": 10 00:19:17.573 } 00:19:17.573 }, 00:19:17.573 "base_bdevs_list": [ 00:19:17.573 { 00:19:17.573 "name": "spare", 00:19:17.573 "uuid": "5100c964-f2f7-5d28-b9fa-150c0a115cce", 00:19:17.573 "is_configured": true, 00:19:17.573 "data_offset": 0, 00:19:17.573 "data_size": 65536 00:19:17.573 }, 00:19:17.573 { 00:19:17.573 "name": "BaseBdev2", 00:19:17.573 "uuid": "07d4e263-1ade-51dc-a9d0-7e5767e17532", 00:19:17.573 "is_configured": true, 00:19:17.573 "data_offset": 0, 00:19:17.573 "data_size": 65536 00:19:17.573 }, 00:19:17.573 { 00:19:17.573 "name": "BaseBdev3", 00:19:17.573 "uuid": "a21cd94a-b067-51c9-b36c-a6daf243de97", 00:19:17.573 "is_configured": true, 00:19:17.573 "data_offset": 0, 00:19:17.573 "data_size": 65536 00:19:17.573 }, 00:19:17.573 { 00:19:17.573 "name": "BaseBdev4", 00:19:17.573 "uuid": "cc6903b8-0114-56a2-95e8-450769adca23", 00:19:17.573 "is_configured": true, 00:19:17.573 "data_offset": 0, 00:19:17.573 "data_size": 65536 00:19:17.573 } 00:19:17.573 ] 00:19:17.573 }' 00:19:17.573 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.831 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:17.831 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.831 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.831 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:18.770 08:52:49 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:18.770 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:18.770 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.770 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:18.770 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:18.770 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.770 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.770 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.770 08:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.770 08:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.770 08:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.770 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.770 "name": "raid_bdev1", 00:19:18.770 "uuid": "ee5e1c24-a59f-417b-ada8-eb269937ee1d", 00:19:18.770 "strip_size_kb": 64, 00:19:18.770 "state": "online", 00:19:18.770 "raid_level": "raid5f", 00:19:18.770 "superblock": false, 00:19:18.770 "num_base_bdevs": 4, 00:19:18.770 "num_base_bdevs_discovered": 4, 00:19:18.770 "num_base_bdevs_operational": 4, 00:19:18.770 "process": { 00:19:18.770 "type": "rebuild", 00:19:18.770 "target": "spare", 00:19:18.770 "progress": { 00:19:18.770 "blocks": 42240, 00:19:18.770 "percent": 21 00:19:18.770 } 00:19:18.770 }, 00:19:18.770 "base_bdevs_list": [ 00:19:18.770 { 00:19:18.770 "name": "spare", 00:19:18.770 "uuid": "5100c964-f2f7-5d28-b9fa-150c0a115cce", 
00:19:18.770 "is_configured": true, 00:19:18.770 "data_offset": 0, 00:19:18.770 "data_size": 65536 00:19:18.770 }, 00:19:18.770 { 00:19:18.770 "name": "BaseBdev2", 00:19:18.770 "uuid": "07d4e263-1ade-51dc-a9d0-7e5767e17532", 00:19:18.770 "is_configured": true, 00:19:18.770 "data_offset": 0, 00:19:18.770 "data_size": 65536 00:19:18.770 }, 00:19:18.770 { 00:19:18.770 "name": "BaseBdev3", 00:19:18.770 "uuid": "a21cd94a-b067-51c9-b36c-a6daf243de97", 00:19:18.770 "is_configured": true, 00:19:18.770 "data_offset": 0, 00:19:18.770 "data_size": 65536 00:19:18.770 }, 00:19:18.770 { 00:19:18.770 "name": "BaseBdev4", 00:19:18.770 "uuid": "cc6903b8-0114-56a2-95e8-450769adca23", 00:19:18.770 "is_configured": true, 00:19:18.770 "data_offset": 0, 00:19:18.770 "data_size": 65536 00:19:18.770 } 00:19:18.770 ] 00:19:18.770 }' 00:19:18.770 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.770 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:18.770 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.029 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:19.029 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:19.965 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:19.965 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:19.965 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.965 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:19.965 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:19.965 08:52:50 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.965 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.965 08:52:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.965 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.965 08:52:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.965 08:52:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.965 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:19.965 "name": "raid_bdev1", 00:19:19.965 "uuid": "ee5e1c24-a59f-417b-ada8-eb269937ee1d", 00:19:19.965 "strip_size_kb": 64, 00:19:19.965 "state": "online", 00:19:19.965 "raid_level": "raid5f", 00:19:19.965 "superblock": false, 00:19:19.965 "num_base_bdevs": 4, 00:19:19.965 "num_base_bdevs_discovered": 4, 00:19:19.965 "num_base_bdevs_operational": 4, 00:19:19.965 "process": { 00:19:19.965 "type": "rebuild", 00:19:19.965 "target": "spare", 00:19:19.965 "progress": { 00:19:19.965 "blocks": 65280, 00:19:19.965 "percent": 33 00:19:19.965 } 00:19:19.965 }, 00:19:19.965 "base_bdevs_list": [ 00:19:19.965 { 00:19:19.965 "name": "spare", 00:19:19.965 "uuid": "5100c964-f2f7-5d28-b9fa-150c0a115cce", 00:19:19.965 "is_configured": true, 00:19:19.965 "data_offset": 0, 00:19:19.965 "data_size": 65536 00:19:19.965 }, 00:19:19.965 { 00:19:19.965 "name": "BaseBdev2", 00:19:19.965 "uuid": "07d4e263-1ade-51dc-a9d0-7e5767e17532", 00:19:19.965 "is_configured": true, 00:19:19.965 "data_offset": 0, 00:19:19.965 "data_size": 65536 00:19:19.965 }, 00:19:19.965 { 00:19:19.965 "name": "BaseBdev3", 00:19:19.965 "uuid": "a21cd94a-b067-51c9-b36c-a6daf243de97", 00:19:19.965 "is_configured": true, 00:19:19.965 "data_offset": 0, 00:19:19.965 "data_size": 65536 00:19:19.965 }, 00:19:19.965 { 00:19:19.965 "name": 
"BaseBdev4", 00:19:19.965 "uuid": "cc6903b8-0114-56a2-95e8-450769adca23", 00:19:19.965 "is_configured": true, 00:19:19.965 "data_offset": 0, 00:19:19.965 "data_size": 65536 00:19:19.965 } 00:19:19.965 ] 00:19:19.965 }' 00:19:19.965 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.965 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:19.965 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.965 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:19.965 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:21.342 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:21.342 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:21.342 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.342 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:21.342 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:21.342 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.342 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.342 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.342 08:52:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.342 08:52:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.342 08:52:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.342 08:52:51 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.342 "name": "raid_bdev1", 00:19:21.342 "uuid": "ee5e1c24-a59f-417b-ada8-eb269937ee1d", 00:19:21.342 "strip_size_kb": 64, 00:19:21.342 "state": "online", 00:19:21.342 "raid_level": "raid5f", 00:19:21.342 "superblock": false, 00:19:21.342 "num_base_bdevs": 4, 00:19:21.342 "num_base_bdevs_discovered": 4, 00:19:21.342 "num_base_bdevs_operational": 4, 00:19:21.342 "process": { 00:19:21.342 "type": "rebuild", 00:19:21.342 "target": "spare", 00:19:21.342 "progress": { 00:19:21.342 "blocks": 88320, 00:19:21.342 "percent": 44 00:19:21.342 } 00:19:21.342 }, 00:19:21.342 "base_bdevs_list": [ 00:19:21.342 { 00:19:21.342 "name": "spare", 00:19:21.342 "uuid": "5100c964-f2f7-5d28-b9fa-150c0a115cce", 00:19:21.342 "is_configured": true, 00:19:21.342 "data_offset": 0, 00:19:21.342 "data_size": 65536 00:19:21.342 }, 00:19:21.342 { 00:19:21.342 "name": "BaseBdev2", 00:19:21.342 "uuid": "07d4e263-1ade-51dc-a9d0-7e5767e17532", 00:19:21.342 "is_configured": true, 00:19:21.342 "data_offset": 0, 00:19:21.342 "data_size": 65536 00:19:21.342 }, 00:19:21.342 { 00:19:21.342 "name": "BaseBdev3", 00:19:21.342 "uuid": "a21cd94a-b067-51c9-b36c-a6daf243de97", 00:19:21.342 "is_configured": true, 00:19:21.342 "data_offset": 0, 00:19:21.342 "data_size": 65536 00:19:21.342 }, 00:19:21.342 { 00:19:21.342 "name": "BaseBdev4", 00:19:21.342 "uuid": "cc6903b8-0114-56a2-95e8-450769adca23", 00:19:21.342 "is_configured": true, 00:19:21.342 "data_offset": 0, 00:19:21.342 "data_size": 65536 00:19:21.342 } 00:19:21.342 ] 00:19:21.342 }' 00:19:21.342 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.342 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:21.342 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.342 08:52:52 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:21.342 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:22.280 08:52:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:22.280 08:52:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:22.280 08:52:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.280 08:52:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:22.280 08:52:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:22.280 08:52:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.280 08:52:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.280 08:52:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.280 08:52:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.280 08:52:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.280 08:52:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.280 08:52:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.280 "name": "raid_bdev1", 00:19:22.280 "uuid": "ee5e1c24-a59f-417b-ada8-eb269937ee1d", 00:19:22.280 "strip_size_kb": 64, 00:19:22.280 "state": "online", 00:19:22.280 "raid_level": "raid5f", 00:19:22.280 "superblock": false, 00:19:22.280 "num_base_bdevs": 4, 00:19:22.280 "num_base_bdevs_discovered": 4, 00:19:22.280 "num_base_bdevs_operational": 4, 00:19:22.280 "process": { 00:19:22.280 "type": "rebuild", 00:19:22.280 "target": "spare", 00:19:22.280 "progress": { 00:19:22.280 "blocks": 109440, 00:19:22.280 "percent": 55 00:19:22.280 } 
00:19:22.280 }, 00:19:22.280 "base_bdevs_list": [ 00:19:22.280 { 00:19:22.280 "name": "spare", 00:19:22.280 "uuid": "5100c964-f2f7-5d28-b9fa-150c0a115cce", 00:19:22.280 "is_configured": true, 00:19:22.280 "data_offset": 0, 00:19:22.280 "data_size": 65536 00:19:22.280 }, 00:19:22.280 { 00:19:22.280 "name": "BaseBdev2", 00:19:22.280 "uuid": "07d4e263-1ade-51dc-a9d0-7e5767e17532", 00:19:22.280 "is_configured": true, 00:19:22.280 "data_offset": 0, 00:19:22.280 "data_size": 65536 00:19:22.280 }, 00:19:22.280 { 00:19:22.280 "name": "BaseBdev3", 00:19:22.280 "uuid": "a21cd94a-b067-51c9-b36c-a6daf243de97", 00:19:22.280 "is_configured": true, 00:19:22.280 "data_offset": 0, 00:19:22.280 "data_size": 65536 00:19:22.280 }, 00:19:22.280 { 00:19:22.280 "name": "BaseBdev4", 00:19:22.280 "uuid": "cc6903b8-0114-56a2-95e8-450769adca23", 00:19:22.280 "is_configured": true, 00:19:22.280 "data_offset": 0, 00:19:22.280 "data_size": 65536 00:19:22.280 } 00:19:22.280 ] 00:19:22.280 }' 00:19:22.280 08:52:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.280 08:52:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:22.280 08:52:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.280 08:52:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:22.280 08:52:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:23.656 08:52:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:23.656 08:52:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:23.656 08:52:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.656 08:52:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:23.656 
08:52:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:23.656 08:52:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.656 08:52:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.656 08:52:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.656 08:52:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.656 08:52:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.656 08:52:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.656 08:52:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.656 "name": "raid_bdev1", 00:19:23.656 "uuid": "ee5e1c24-a59f-417b-ada8-eb269937ee1d", 00:19:23.656 "strip_size_kb": 64, 00:19:23.656 "state": "online", 00:19:23.656 "raid_level": "raid5f", 00:19:23.656 "superblock": false, 00:19:23.656 "num_base_bdevs": 4, 00:19:23.656 "num_base_bdevs_discovered": 4, 00:19:23.656 "num_base_bdevs_operational": 4, 00:19:23.656 "process": { 00:19:23.656 "type": "rebuild", 00:19:23.656 "target": "spare", 00:19:23.656 "progress": { 00:19:23.656 "blocks": 132480, 00:19:23.656 "percent": 67 00:19:23.656 } 00:19:23.656 }, 00:19:23.656 "base_bdevs_list": [ 00:19:23.656 { 00:19:23.656 "name": "spare", 00:19:23.656 "uuid": "5100c964-f2f7-5d28-b9fa-150c0a115cce", 00:19:23.656 "is_configured": true, 00:19:23.656 "data_offset": 0, 00:19:23.656 "data_size": 65536 00:19:23.656 }, 00:19:23.656 { 00:19:23.656 "name": "BaseBdev2", 00:19:23.656 "uuid": "07d4e263-1ade-51dc-a9d0-7e5767e17532", 00:19:23.656 "is_configured": true, 00:19:23.656 "data_offset": 0, 00:19:23.656 "data_size": 65536 00:19:23.656 }, 00:19:23.656 { 00:19:23.656 "name": "BaseBdev3", 00:19:23.656 "uuid": "a21cd94a-b067-51c9-b36c-a6daf243de97", 
00:19:23.656 "is_configured": true, 00:19:23.656 "data_offset": 0, 00:19:23.656 "data_size": 65536 00:19:23.656 }, 00:19:23.656 { 00:19:23.656 "name": "BaseBdev4", 00:19:23.656 "uuid": "cc6903b8-0114-56a2-95e8-450769adca23", 00:19:23.656 "is_configured": true, 00:19:23.656 "data_offset": 0, 00:19:23.656 "data_size": 65536 00:19:23.656 } 00:19:23.656 ] 00:19:23.656 }' 00:19:23.656 08:52:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.656 08:52:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:23.656 08:52:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.656 08:52:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:23.656 08:52:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:24.592 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:24.592 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:24.592 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.592 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:24.592 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:24.592 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.592 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.592 08:52:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.592 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.592 08:52:55 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:24.592 08:52:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.592 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.592 "name": "raid_bdev1", 00:19:24.592 "uuid": "ee5e1c24-a59f-417b-ada8-eb269937ee1d", 00:19:24.592 "strip_size_kb": 64, 00:19:24.592 "state": "online", 00:19:24.592 "raid_level": "raid5f", 00:19:24.592 "superblock": false, 00:19:24.592 "num_base_bdevs": 4, 00:19:24.592 "num_base_bdevs_discovered": 4, 00:19:24.592 "num_base_bdevs_operational": 4, 00:19:24.592 "process": { 00:19:24.592 "type": "rebuild", 00:19:24.592 "target": "spare", 00:19:24.592 "progress": { 00:19:24.592 "blocks": 153600, 00:19:24.592 "percent": 78 00:19:24.592 } 00:19:24.592 }, 00:19:24.592 "base_bdevs_list": [ 00:19:24.592 { 00:19:24.592 "name": "spare", 00:19:24.592 "uuid": "5100c964-f2f7-5d28-b9fa-150c0a115cce", 00:19:24.592 "is_configured": true, 00:19:24.592 "data_offset": 0, 00:19:24.592 "data_size": 65536 00:19:24.592 }, 00:19:24.592 { 00:19:24.592 "name": "BaseBdev2", 00:19:24.592 "uuid": "07d4e263-1ade-51dc-a9d0-7e5767e17532", 00:19:24.592 "is_configured": true, 00:19:24.592 "data_offset": 0, 00:19:24.592 "data_size": 65536 00:19:24.592 }, 00:19:24.592 { 00:19:24.592 "name": "BaseBdev3", 00:19:24.592 "uuid": "a21cd94a-b067-51c9-b36c-a6daf243de97", 00:19:24.592 "is_configured": true, 00:19:24.592 "data_offset": 0, 00:19:24.592 "data_size": 65536 00:19:24.592 }, 00:19:24.592 { 00:19:24.592 "name": "BaseBdev4", 00:19:24.592 "uuid": "cc6903b8-0114-56a2-95e8-450769adca23", 00:19:24.592 "is_configured": true, 00:19:24.592 "data_offset": 0, 00:19:24.592 "data_size": 65536 00:19:24.592 } 00:19:24.592 ] 00:19:24.592 }' 00:19:24.592 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.592 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:19:24.592 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.850 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:24.850 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:25.792 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:25.792 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:25.792 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:25.792 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:25.792 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:25.792 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.792 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.792 08:52:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.792 08:52:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.792 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.792 08:52:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.792 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:25.792 "name": "raid_bdev1", 00:19:25.792 "uuid": "ee5e1c24-a59f-417b-ada8-eb269937ee1d", 00:19:25.792 "strip_size_kb": 64, 00:19:25.792 "state": "online", 00:19:25.792 "raid_level": "raid5f", 00:19:25.792 "superblock": false, 00:19:25.792 "num_base_bdevs": 4, 00:19:25.792 "num_base_bdevs_discovered": 4, 00:19:25.792 "num_base_bdevs_operational": 4, 00:19:25.792 
"process": { 00:19:25.792 "type": "rebuild", 00:19:25.792 "target": "spare", 00:19:25.792 "progress": { 00:19:25.792 "blocks": 176640, 00:19:25.792 "percent": 89 00:19:25.792 } 00:19:25.792 }, 00:19:25.792 "base_bdevs_list": [ 00:19:25.792 { 00:19:25.792 "name": "spare", 00:19:25.792 "uuid": "5100c964-f2f7-5d28-b9fa-150c0a115cce", 00:19:25.792 "is_configured": true, 00:19:25.792 "data_offset": 0, 00:19:25.792 "data_size": 65536 00:19:25.792 }, 00:19:25.792 { 00:19:25.792 "name": "BaseBdev2", 00:19:25.792 "uuid": "07d4e263-1ade-51dc-a9d0-7e5767e17532", 00:19:25.792 "is_configured": true, 00:19:25.792 "data_offset": 0, 00:19:25.792 "data_size": 65536 00:19:25.792 }, 00:19:25.792 { 00:19:25.792 "name": "BaseBdev3", 00:19:25.792 "uuid": "a21cd94a-b067-51c9-b36c-a6daf243de97", 00:19:25.792 "is_configured": true, 00:19:25.792 "data_offset": 0, 00:19:25.792 "data_size": 65536 00:19:25.792 }, 00:19:25.792 { 00:19:25.792 "name": "BaseBdev4", 00:19:25.792 "uuid": "cc6903b8-0114-56a2-95e8-450769adca23", 00:19:25.792 "is_configured": true, 00:19:25.792 "data_offset": 0, 00:19:25.792 "data_size": 65536 00:19:25.792 } 00:19:25.792 ] 00:19:25.792 }' 00:19:25.792 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:25.792 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:25.792 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:25.792 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:25.792 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:26.769 [2024-11-20 08:52:57.624763] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:26.770 [2024-11-20 08:52:57.624857] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:26.770 [2024-11-20 
08:52:57.624935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:26.770 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:26.770 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:26.770 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:26.770 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:26.770 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:26.770 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:27.029 "name": "raid_bdev1", 00:19:27.029 "uuid": "ee5e1c24-a59f-417b-ada8-eb269937ee1d", 00:19:27.029 "strip_size_kb": 64, 00:19:27.029 "state": "online", 00:19:27.029 "raid_level": "raid5f", 00:19:27.029 "superblock": false, 00:19:27.029 "num_base_bdevs": 4, 00:19:27.029 "num_base_bdevs_discovered": 4, 00:19:27.029 "num_base_bdevs_operational": 4, 00:19:27.029 "base_bdevs_list": [ 00:19:27.029 { 00:19:27.029 "name": "spare", 00:19:27.029 "uuid": "5100c964-f2f7-5d28-b9fa-150c0a115cce", 00:19:27.029 "is_configured": true, 00:19:27.029 "data_offset": 0, 00:19:27.029 "data_size": 65536 
00:19:27.029 }, 00:19:27.029 { 00:19:27.029 "name": "BaseBdev2", 00:19:27.029 "uuid": "07d4e263-1ade-51dc-a9d0-7e5767e17532", 00:19:27.029 "is_configured": true, 00:19:27.029 "data_offset": 0, 00:19:27.029 "data_size": 65536 00:19:27.029 }, 00:19:27.029 { 00:19:27.029 "name": "BaseBdev3", 00:19:27.029 "uuid": "a21cd94a-b067-51c9-b36c-a6daf243de97", 00:19:27.029 "is_configured": true, 00:19:27.029 "data_offset": 0, 00:19:27.029 "data_size": 65536 00:19:27.029 }, 00:19:27.029 { 00:19:27.029 "name": "BaseBdev4", 00:19:27.029 "uuid": "cc6903b8-0114-56a2-95e8-450769adca23", 00:19:27.029 "is_configured": true, 00:19:27.029 "data_offset": 0, 00:19:27.029 "data_size": 65536 00:19:27.029 } 00:19:27.029 ] 00:19:27.029 }' 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:27.029 "name": "raid_bdev1", 00:19:27.029 "uuid": "ee5e1c24-a59f-417b-ada8-eb269937ee1d", 00:19:27.029 "strip_size_kb": 64, 00:19:27.029 "state": "online", 00:19:27.029 "raid_level": "raid5f", 00:19:27.029 "superblock": false, 00:19:27.029 "num_base_bdevs": 4, 00:19:27.029 "num_base_bdevs_discovered": 4, 00:19:27.029 "num_base_bdevs_operational": 4, 00:19:27.029 "base_bdevs_list": [ 00:19:27.029 { 00:19:27.029 "name": "spare", 00:19:27.029 "uuid": "5100c964-f2f7-5d28-b9fa-150c0a115cce", 00:19:27.029 "is_configured": true, 00:19:27.029 "data_offset": 0, 00:19:27.029 "data_size": 65536 00:19:27.029 }, 00:19:27.029 { 00:19:27.029 "name": "BaseBdev2", 00:19:27.029 "uuid": "07d4e263-1ade-51dc-a9d0-7e5767e17532", 00:19:27.029 "is_configured": true, 00:19:27.029 "data_offset": 0, 00:19:27.029 "data_size": 65536 00:19:27.029 }, 00:19:27.029 { 00:19:27.029 "name": "BaseBdev3", 00:19:27.029 "uuid": "a21cd94a-b067-51c9-b36c-a6daf243de97", 00:19:27.029 "is_configured": true, 00:19:27.029 "data_offset": 0, 00:19:27.029 "data_size": 65536 00:19:27.029 }, 00:19:27.029 { 00:19:27.029 "name": "BaseBdev4", 00:19:27.029 "uuid": "cc6903b8-0114-56a2-95e8-450769adca23", 00:19:27.029 "is_configured": true, 00:19:27.029 "data_offset": 0, 00:19:27.029 "data_size": 65536 00:19:27.029 } 00:19:27.029 ] 00:19:27.029 }' 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == 
\n\o\n\e ]] 00:19:27.029 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.288 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:27.288 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:27.288 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.288 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.288 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:27.288 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:27.288 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:27.288 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.288 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.288 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.288 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.288 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.288 08:52:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.288 08:52:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.288 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.288 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.288 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.288 "name": 
"raid_bdev1", 00:19:27.289 "uuid": "ee5e1c24-a59f-417b-ada8-eb269937ee1d", 00:19:27.289 "strip_size_kb": 64, 00:19:27.289 "state": "online", 00:19:27.289 "raid_level": "raid5f", 00:19:27.289 "superblock": false, 00:19:27.289 "num_base_bdevs": 4, 00:19:27.289 "num_base_bdevs_discovered": 4, 00:19:27.289 "num_base_bdevs_operational": 4, 00:19:27.289 "base_bdevs_list": [ 00:19:27.289 { 00:19:27.289 "name": "spare", 00:19:27.289 "uuid": "5100c964-f2f7-5d28-b9fa-150c0a115cce", 00:19:27.289 "is_configured": true, 00:19:27.289 "data_offset": 0, 00:19:27.289 "data_size": 65536 00:19:27.289 }, 00:19:27.289 { 00:19:27.289 "name": "BaseBdev2", 00:19:27.289 "uuid": "07d4e263-1ade-51dc-a9d0-7e5767e17532", 00:19:27.289 "is_configured": true, 00:19:27.289 "data_offset": 0, 00:19:27.289 "data_size": 65536 00:19:27.289 }, 00:19:27.289 { 00:19:27.289 "name": "BaseBdev3", 00:19:27.289 "uuid": "a21cd94a-b067-51c9-b36c-a6daf243de97", 00:19:27.289 "is_configured": true, 00:19:27.289 "data_offset": 0, 00:19:27.289 "data_size": 65536 00:19:27.289 }, 00:19:27.289 { 00:19:27.289 "name": "BaseBdev4", 00:19:27.289 "uuid": "cc6903b8-0114-56a2-95e8-450769adca23", 00:19:27.289 "is_configured": true, 00:19:27.289 "data_offset": 0, 00:19:27.289 "data_size": 65536 00:19:27.289 } 00:19:27.289 ] 00:19:27.289 }' 00:19:27.289 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.289 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.855 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:27.855 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.855 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.855 [2024-11-20 08:52:58.531012] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:27.855 [2024-11-20 08:52:58.531071] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:27.855 [2024-11-20 08:52:58.531200] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:27.855 [2024-11-20 08:52:58.531344] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:27.855 [2024-11-20 08:52:58.531373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:27.855 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.855 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.855 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.855 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.855 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:19:27.855 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.855 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:27.855 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:27.855 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:27.855 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:27.855 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:27.855 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:27.855 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:27.855 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:27.855 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:27.855 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:27.855 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:27.855 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:27.855 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:28.114 /dev/nbd0 00:19:28.114 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:28.114 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:28.114 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:28.114 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:28.114 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:28.114 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:28.114 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:28.114 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:28.114 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:28.114 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:28.114 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:28.114 1+0 records in 00:19:28.114 1+0 records out 00:19:28.114 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000602767 s, 6.8 MB/s 00:19:28.114 08:52:58 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.114 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:28.114 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.114 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:28.114 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:28.114 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:28.114 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:28.114 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:28.373 /dev/nbd1 00:19:28.373 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:28.373 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:28.373 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:28.373 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:28.374 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:28.374 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:28.374 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:28.374 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:28.374 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:28.374 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 
20 )) 00:19:28.374 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:28.374 1+0 records in 00:19:28.374 1+0 records out 00:19:28.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399922 s, 10.2 MB/s 00:19:28.374 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.374 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:28.374 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.374 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:28.374 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:28.374 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:28.374 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:28.374 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:28.633 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:28.633 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:28.633 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:28.633 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:28.633 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:28.633 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:28.633 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:28.893 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:28.893 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:28.893 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:28.893 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:28.893 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:28.893 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:28.893 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:28.893 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:28.893 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:28.893 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:29.153 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:29.153 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:29.153 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:29.153 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:29.153 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:29.153 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:29.153 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:29.153 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:29.153 08:53:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:29.153 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85005 00:19:29.153 08:53:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85005 ']' 00:19:29.153 08:53:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85005 00:19:29.153 08:53:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:19:29.153 08:53:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:29.153 08:53:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85005 00:19:29.412 killing process with pid 85005 00:19:29.412 Received shutdown signal, test time was about 60.000000 seconds 00:19:29.412 00:19:29.412 Latency(us) 00:19:29.412 [2024-11-20T08:53:00.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.412 [2024-11-20T08:53:00.328Z] =================================================================================================================== 00:19:29.412 [2024-11-20T08:53:00.328Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:29.412 08:53:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:29.412 08:53:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:29.412 08:53:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85005' 00:19:29.412 08:53:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85005 00:19:29.412 08:53:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 85005 00:19:29.412 [2024-11-20 08:53:00.078452] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:29.670 [2024-11-20 08:53:00.519064] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:19:31.069 ************************************ 00:19:31.069 END TEST raid5f_rebuild_test 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:19:31.069 00:19:31.069 real 0m20.102s 00:19:31.069 user 0m24.955s 00:19:31.069 sys 0m2.287s 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.069 ************************************ 00:19:31.069 08:53:01 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:19:31.069 08:53:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:31.069 08:53:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.069 08:53:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:31.069 ************************************ 00:19:31.069 START TEST raid5f_rebuild_test_sb 00:19:31.069 ************************************ 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:31.069 08:53:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:31.069 
08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:31.069 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:31.070 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:31.070 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:31.070 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:31.070 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85514 00:19:31.070 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85514 00:19:31.070 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85514 ']' 00:19:31.070 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:31.070 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.070 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.070 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.070 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.070 08:53:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.070 [2024-11-20 08:53:01.730115] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
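The `waitfornbd` checks traced throughout this log (grep of /proc/partitions followed by a single O_DIRECT `dd` read) can be sketched as a standalone function. This is a hedged reconstruction from the xtrace output above, not SPDK's exact `autotest_common.sh` source; the function name `wait_for_blockdev`, the default retry count of 20, and the poll interval are assumptions.

```shell
#!/usr/bin/env bash
# Sketch of the readiness pattern visible in the waitfornbd traces:
#  1) poll /proc/partitions until the kernel registers the device,
#  2) prove it actually services I/O with one 4 KiB O_DIRECT read.
# Function name, retry count, and poll interval are assumptions.
wait_for_blockdev() {
	local name=$1 retries=${2:-20} i
	for (( i = 1; i <= retries; i++ )); do
		# -w matches the whole device name, so nbd1 does not match nbd10
		grep -q -w "$name" /proc/partitions && break
		sleep 0.1
	done
	(( i <= retries )) || return 1   # device never appeared in /proc/partitions
	# iflag=direct bypasses the page cache, so the read must reach the device
	dd if="/dev/$name" of=/dev/null bs=4096 count=1 iflag=direct 2>/dev/null
}
```

In the trace this pattern runs once per device right after each `nbd_start_disk` RPC and before any data comparison such as `cmp -i 0 /dev/nbd0 /dev/nbd1`; the matching `waitfornbd_exit` loops seen after each `nbd_stop_disk` invert the grep, polling until the name disappears from /proc/partitions.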
00:19:31.070 [2024-11-20 08:53:01.730784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85514 ] 00:19:31.070 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:31.070 Zero copy mechanism will not be used. [2024-11-20 08:53:01.926496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.362 [2024-11-20 08:53:02.048682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.362 [2024-11-20 08:53:02.249648] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:31.362 [2024-11-20 08:53:02.249683] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:31.930 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.930 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:31.930 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:31.930 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:31.930 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.930 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.930 BaseBdev1_malloc 00:19:31.930 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.930 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:31.930 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.930 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:31.930 [2024-11-20 08:53:02.787765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:31.930 [2024-11-20 08:53:02.787864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.930 [2024-11-20 08:53:02.787925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:31.930 [2024-11-20 08:53:02.787941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.930 [2024-11-20 08:53:02.790971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.930 [2024-11-20 08:53:02.791038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:31.930 BaseBdev1 00:19:31.930 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.930 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:31.930 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:31.930 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.930 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.931 BaseBdev2_malloc 00:19:31.931 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.931 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:31.931 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.931 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.931 [2024-11-20 08:53:02.841684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:31.931 
[2024-11-20 08:53:02.841759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.931 [2024-11-20 08:53:02.841787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:31.931 [2024-11-20 08:53:02.841807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.190 [2024-11-20 08:53:02.844904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.190 [2024-11-20 08:53:02.844971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:32.190 BaseBdev2 00:19:32.190 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.190 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:32.190 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:32.190 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.190 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.190 BaseBdev3_malloc 00:19:32.190 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.191 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:32.191 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.191 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.191 [2024-11-20 08:53:02.910228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:32.191 [2024-11-20 08:53:02.910332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.191 [2024-11-20 08:53:02.910362] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:32.191 [2024-11-20 08:53:02.910394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.191 [2024-11-20 08:53:02.913363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.191 [2024-11-20 08:53:02.913414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:32.191 BaseBdev3 00:19:32.191 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.191 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:32.191 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:32.191 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.191 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.191 BaseBdev4_malloc 00:19:32.191 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.191 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:32.191 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.191 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.191 [2024-11-20 08:53:02.962966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:32.191 [2024-11-20 08:53:02.963046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.191 [2024-11-20 08:53:02.963074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:32.191 [2024-11-20 08:53:02.963091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:19:32.191 [2024-11-20 08:53:02.965979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.191 [2024-11-20 08:53:02.966047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:32.191 BaseBdev4 00:19:32.191 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.191 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:32.191 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.191 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.191 spare_malloc 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.191 spare_delay 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.191 [2024-11-20 08:53:03.021556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:32.191 [2024-11-20 08:53:03.021627] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.191 [2024-11-20 08:53:03.021657] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:32.191 [2024-11-20 08:53:03.021674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.191 [2024-11-20 08:53:03.024632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.191 [2024-11-20 08:53:03.024697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:32.191 spare 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.191 [2024-11-20 08:53:03.029668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:32.191 [2024-11-20 08:53:03.032224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:32.191 [2024-11-20 08:53:03.032358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:32.191 [2024-11-20 08:53:03.032444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:32.191 [2024-11-20 08:53:03.032734] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:32.191 [2024-11-20 08:53:03.032757] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:32.191 [2024-11-20 08:53:03.033043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:32.191 [2024-11-20 08:53:03.039488] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:32.191 
[2024-11-20 08:53:03.039530] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:32.191 [2024-11-20 08:53:03.039828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.191 "name": "raid_bdev1", 00:19:32.191 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:32.191 "strip_size_kb": 64, 00:19:32.191 "state": "online", 00:19:32.191 "raid_level": "raid5f", 00:19:32.191 "superblock": true, 00:19:32.191 "num_base_bdevs": 4, 00:19:32.191 "num_base_bdevs_discovered": 4, 00:19:32.191 "num_base_bdevs_operational": 4, 00:19:32.191 "base_bdevs_list": [ 00:19:32.191 { 00:19:32.191 "name": "BaseBdev1", 00:19:32.191 "uuid": "3d1eb9dc-8c5c-5922-a62a-6e2db755c17b", 00:19:32.191 "is_configured": true, 00:19:32.191 "data_offset": 2048, 00:19:32.191 "data_size": 63488 00:19:32.191 }, 00:19:32.191 { 00:19:32.191 "name": "BaseBdev2", 00:19:32.191 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:32.191 "is_configured": true, 00:19:32.191 "data_offset": 2048, 00:19:32.191 "data_size": 63488 00:19:32.191 }, 00:19:32.191 { 00:19:32.191 "name": "BaseBdev3", 00:19:32.191 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:32.191 "is_configured": true, 00:19:32.191 "data_offset": 2048, 00:19:32.191 "data_size": 63488 00:19:32.191 }, 00:19:32.191 { 00:19:32.191 "name": "BaseBdev4", 00:19:32.191 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:32.191 "is_configured": true, 00:19:32.191 "data_offset": 2048, 00:19:32.191 "data_size": 63488 00:19:32.191 } 00:19:32.191 ] 00:19:32.191 }' 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.191 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.760 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:32.760 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:32.760 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.760 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.760 [2024-11-20 08:53:03.579841] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:32.760 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.760 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:19:32.760 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:32.760 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.760 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.761 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.761 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.020 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:33.020 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:33.020 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:33.020 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:33.020 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:33.020 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:33.020 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:33.020 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:33.020 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:33.020 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:33.020 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:33.020 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:33.020 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:33.020 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:33.279 [2024-11-20 08:53:03.979764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:33.279 /dev/nbd0 00:19:33.279 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:33.279 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:33.279 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:33.279 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:33.279 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:33.279 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:33.279 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:33.279 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:33.279 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:33.279 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:33.279 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:19:33.279 1+0 records in 00:19:33.279 1+0 records out 00:19:33.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003147 s, 13.0 MB/s 00:19:33.279 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:33.279 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:33.279 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:33.279 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:33.279 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:33.279 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:33.279 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:33.279 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:33.279 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:19:33.279 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:19:33.279 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:19:33.848 496+0 records in 00:19:33.848 496+0 records out 00:19:33.848 97517568 bytes (98 MB, 93 MiB) copied, 0.664921 s, 147 MB/s 00:19:33.848 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:33.848 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:33.848 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:33.848 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- 
# local nbd_list 00:19:33.848 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:33.848 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:33.848 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:34.416 [2024-11-20 08:53:05.048961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.416 [2024-11-20 08:53:05.061459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online 
raid5f 64 3 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.416 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.416 "name": "raid_bdev1", 00:19:34.416 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:34.416 "strip_size_kb": 64, 00:19:34.416 "state": "online", 00:19:34.416 "raid_level": "raid5f", 00:19:34.416 "superblock": true, 00:19:34.416 "num_base_bdevs": 4, 00:19:34.416 "num_base_bdevs_discovered": 3, 00:19:34.416 
"num_base_bdevs_operational": 3, 00:19:34.416 "base_bdevs_list": [ 00:19:34.416 { 00:19:34.416 "name": null, 00:19:34.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.416 "is_configured": false, 00:19:34.416 "data_offset": 0, 00:19:34.416 "data_size": 63488 00:19:34.416 }, 00:19:34.416 { 00:19:34.416 "name": "BaseBdev2", 00:19:34.416 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:34.416 "is_configured": true, 00:19:34.416 "data_offset": 2048, 00:19:34.416 "data_size": 63488 00:19:34.416 }, 00:19:34.416 { 00:19:34.416 "name": "BaseBdev3", 00:19:34.416 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:34.416 "is_configured": true, 00:19:34.416 "data_offset": 2048, 00:19:34.416 "data_size": 63488 00:19:34.416 }, 00:19:34.416 { 00:19:34.416 "name": "BaseBdev4", 00:19:34.416 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:34.416 "is_configured": true, 00:19:34.416 "data_offset": 2048, 00:19:34.416 "data_size": 63488 00:19:34.416 } 00:19:34.416 ] 00:19:34.416 }' 00:19:34.417 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.417 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.676 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:34.676 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.676 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.676 [2024-11-20 08:53:05.569648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:34.676 [2024-11-20 08:53:05.584279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:19:34.676 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.676 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:34.935 
[2024-11-20 08:53:05.593424] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:35.873 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:35.873 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:35.873 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:35.873 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:35.873 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:35.873 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.873 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.873 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.873 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.873 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.873 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:35.873 "name": "raid_bdev1", 00:19:35.873 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:35.873 "strip_size_kb": 64, 00:19:35.873 "state": "online", 00:19:35.873 "raid_level": "raid5f", 00:19:35.873 "superblock": true, 00:19:35.873 "num_base_bdevs": 4, 00:19:35.873 "num_base_bdevs_discovered": 4, 00:19:35.873 "num_base_bdevs_operational": 4, 00:19:35.873 "process": { 00:19:35.873 "type": "rebuild", 00:19:35.873 "target": "spare", 00:19:35.873 "progress": { 00:19:35.873 "blocks": 17280, 00:19:35.873 "percent": 9 00:19:35.873 } 00:19:35.873 }, 00:19:35.873 "base_bdevs_list": [ 00:19:35.873 { 00:19:35.873 "name": 
"spare", 00:19:35.873 "uuid": "555f9286-fb95-5535-8788-23b4b0cd3f92", 00:19:35.873 "is_configured": true, 00:19:35.873 "data_offset": 2048, 00:19:35.873 "data_size": 63488 00:19:35.873 }, 00:19:35.873 { 00:19:35.873 "name": "BaseBdev2", 00:19:35.873 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:35.873 "is_configured": true, 00:19:35.873 "data_offset": 2048, 00:19:35.873 "data_size": 63488 00:19:35.873 }, 00:19:35.873 { 00:19:35.873 "name": "BaseBdev3", 00:19:35.873 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:35.873 "is_configured": true, 00:19:35.873 "data_offset": 2048, 00:19:35.873 "data_size": 63488 00:19:35.873 }, 00:19:35.873 { 00:19:35.873 "name": "BaseBdev4", 00:19:35.873 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:35.873 "is_configured": true, 00:19:35.873 "data_offset": 2048, 00:19:35.873 "data_size": 63488 00:19:35.873 } 00:19:35.873 ] 00:19:35.873 }' 00:19:35.873 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:35.873 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:35.873 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:35.873 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:35.873 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:35.873 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.873 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.873 [2024-11-20 08:53:06.755296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:36.132 [2024-11-20 08:53:06.805178] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:36.132 [2024-11-20 
08:53:06.805374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.132 [2024-11-20 08:53:06.805406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:36.132 [2024-11-20 08:53:06.805427] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:36.132 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.132 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:36.132 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:36.132 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:36.132 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:36.132 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:36.132 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:36.132 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.132 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.132 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.132 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.132 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.132 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.132 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.132 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:19:36.132 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.132 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.132 "name": "raid_bdev1", 00:19:36.132 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:36.132 "strip_size_kb": 64, 00:19:36.132 "state": "online", 00:19:36.132 "raid_level": "raid5f", 00:19:36.132 "superblock": true, 00:19:36.132 "num_base_bdevs": 4, 00:19:36.132 "num_base_bdevs_discovered": 3, 00:19:36.132 "num_base_bdevs_operational": 3, 00:19:36.132 "base_bdevs_list": [ 00:19:36.132 { 00:19:36.132 "name": null, 00:19:36.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.132 "is_configured": false, 00:19:36.132 "data_offset": 0, 00:19:36.132 "data_size": 63488 00:19:36.132 }, 00:19:36.132 { 00:19:36.132 "name": "BaseBdev2", 00:19:36.132 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:36.132 "is_configured": true, 00:19:36.132 "data_offset": 2048, 00:19:36.132 "data_size": 63488 00:19:36.132 }, 00:19:36.132 { 00:19:36.132 "name": "BaseBdev3", 00:19:36.132 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:36.132 "is_configured": true, 00:19:36.132 "data_offset": 2048, 00:19:36.132 "data_size": 63488 00:19:36.132 }, 00:19:36.132 { 00:19:36.132 "name": "BaseBdev4", 00:19:36.132 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:36.132 "is_configured": true, 00:19:36.132 "data_offset": 2048, 00:19:36.132 "data_size": 63488 00:19:36.132 } 00:19:36.132 ] 00:19:36.132 }' 00:19:36.132 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.132 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.701 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:36.701 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:19:36.701 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:36.701 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:36.701 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:36.701 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.701 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.701 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.701 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.701 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.701 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:36.701 "name": "raid_bdev1", 00:19:36.701 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:36.701 "strip_size_kb": 64, 00:19:36.701 "state": "online", 00:19:36.701 "raid_level": "raid5f", 00:19:36.701 "superblock": true, 00:19:36.701 "num_base_bdevs": 4, 00:19:36.701 "num_base_bdevs_discovered": 3, 00:19:36.701 "num_base_bdevs_operational": 3, 00:19:36.701 "base_bdevs_list": [ 00:19:36.701 { 00:19:36.701 "name": null, 00:19:36.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.701 "is_configured": false, 00:19:36.701 "data_offset": 0, 00:19:36.701 "data_size": 63488 00:19:36.701 }, 00:19:36.701 { 00:19:36.701 "name": "BaseBdev2", 00:19:36.701 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:36.701 "is_configured": true, 00:19:36.701 "data_offset": 2048, 00:19:36.701 "data_size": 63488 00:19:36.701 }, 00:19:36.701 { 00:19:36.701 "name": "BaseBdev3", 00:19:36.701 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:36.701 "is_configured": true, 
00:19:36.701 "data_offset": 2048, 00:19:36.701 "data_size": 63488 00:19:36.701 }, 00:19:36.701 { 00:19:36.701 "name": "BaseBdev4", 00:19:36.701 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:36.701 "is_configured": true, 00:19:36.701 "data_offset": 2048, 00:19:36.701 "data_size": 63488 00:19:36.701 } 00:19:36.701 ] 00:19:36.701 }' 00:19:36.701 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:36.701 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:36.701 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:36.701 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:36.701 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:36.701 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.701 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.701 [2024-11-20 08:53:07.509848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:36.701 [2024-11-20 08:53:07.523827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:19:36.701 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.701 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:36.701 [2024-11-20 08:53:07.533102] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:37.637 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:37.637 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:37.637 08:53:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:37.637 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:37.637 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:37.637 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.637 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.637 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.637 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.637 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.896 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:37.896 "name": "raid_bdev1", 00:19:37.896 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:37.896 "strip_size_kb": 64, 00:19:37.896 "state": "online", 00:19:37.896 "raid_level": "raid5f", 00:19:37.896 "superblock": true, 00:19:37.896 "num_base_bdevs": 4, 00:19:37.896 "num_base_bdevs_discovered": 4, 00:19:37.896 "num_base_bdevs_operational": 4, 00:19:37.896 "process": { 00:19:37.896 "type": "rebuild", 00:19:37.896 "target": "spare", 00:19:37.896 "progress": { 00:19:37.896 "blocks": 17280, 00:19:37.896 "percent": 9 00:19:37.896 } 00:19:37.896 }, 00:19:37.896 "base_bdevs_list": [ 00:19:37.896 { 00:19:37.896 "name": "spare", 00:19:37.896 "uuid": "555f9286-fb95-5535-8788-23b4b0cd3f92", 00:19:37.896 "is_configured": true, 00:19:37.896 "data_offset": 2048, 00:19:37.896 "data_size": 63488 00:19:37.896 }, 00:19:37.896 { 00:19:37.896 "name": "BaseBdev2", 00:19:37.896 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:37.896 "is_configured": true, 00:19:37.896 "data_offset": 2048, 00:19:37.896 "data_size": 63488 
00:19:37.896 }, 00:19:37.896 { 00:19:37.896 "name": "BaseBdev3", 00:19:37.896 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:37.896 "is_configured": true, 00:19:37.896 "data_offset": 2048, 00:19:37.896 "data_size": 63488 00:19:37.896 }, 00:19:37.896 { 00:19:37.896 "name": "BaseBdev4", 00:19:37.896 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:37.896 "is_configured": true, 00:19:37.897 "data_offset": 2048, 00:19:37.897 "data_size": 63488 00:19:37.897 } 00:19:37.897 ] 00:19:37.897 }' 00:19:37.897 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:37.897 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:37.897 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:37.897 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:37.897 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:37.897 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:37.897 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:37.897 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:37.897 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:37.897 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=689 00:19:37.897 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:37.897 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:37.897 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:37.897 08:53:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:37.897 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:37.897 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:37.897 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.897 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.897 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.897 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.897 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.897 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:37.897 "name": "raid_bdev1", 00:19:37.897 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:37.897 "strip_size_kb": 64, 00:19:37.897 "state": "online", 00:19:37.897 "raid_level": "raid5f", 00:19:37.897 "superblock": true, 00:19:37.897 "num_base_bdevs": 4, 00:19:37.897 "num_base_bdevs_discovered": 4, 00:19:37.897 "num_base_bdevs_operational": 4, 00:19:37.897 "process": { 00:19:37.897 "type": "rebuild", 00:19:37.897 "target": "spare", 00:19:37.897 "progress": { 00:19:37.897 "blocks": 21120, 00:19:37.897 "percent": 11 00:19:37.897 } 00:19:37.897 }, 00:19:37.897 "base_bdevs_list": [ 00:19:37.897 { 00:19:37.897 "name": "spare", 00:19:37.897 "uuid": "555f9286-fb95-5535-8788-23b4b0cd3f92", 00:19:37.897 "is_configured": true, 00:19:37.897 "data_offset": 2048, 00:19:37.897 "data_size": 63488 00:19:37.897 }, 00:19:37.897 { 00:19:37.897 "name": "BaseBdev2", 00:19:37.897 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:37.897 "is_configured": true, 00:19:37.897 "data_offset": 2048, 00:19:37.897 "data_size": 63488 
00:19:37.897 }, 00:19:37.897 { 00:19:37.897 "name": "BaseBdev3", 00:19:37.897 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:37.897 "is_configured": true, 00:19:37.897 "data_offset": 2048, 00:19:37.897 "data_size": 63488 00:19:37.897 }, 00:19:37.897 { 00:19:37.897 "name": "BaseBdev4", 00:19:37.897 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:37.897 "is_configured": true, 00:19:37.897 "data_offset": 2048, 00:19:37.897 "data_size": 63488 00:19:37.897 } 00:19:37.897 ] 00:19:37.897 }' 00:19:37.897 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:37.897 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:37.897 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.156 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:38.156 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:39.092 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:39.092 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:39.092 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:39.092 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:39.092 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:39.092 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:39.092 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.092 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
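The `percent` values reported in these progress polls follow directly from the block counts: with 4 base bdevs in raid5f, one bdev's worth of each stripe is parity, leaving 3 data bdevs of `data_size` 63488 blocks each. A hedged sketch of that arithmetic, with the geometry taken from the log and integer division matching the rounded-down values reported:

```shell
# raid5f with 4 base bdevs => 3 data bdevs of 63488 blocks each (from the log).
total=$((3 * 63488))   # 190464 blocks of rebuildable data

# Block counts observed in the successive progress polls above.
for blocks in 17280 21120 44160 65280 88320 109440; do
  echo "$blocks -> $((blocks * 100 / total))%"
done
# prints 9%, 11%, 23%, 34%, 46%, 57% — matching the "percent" fields in the trace
```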
00:19:39.092 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.092 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.092 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.092 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:39.092 "name": "raid_bdev1", 00:19:39.092 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:39.092 "strip_size_kb": 64, 00:19:39.092 "state": "online", 00:19:39.092 "raid_level": "raid5f", 00:19:39.092 "superblock": true, 00:19:39.092 "num_base_bdevs": 4, 00:19:39.092 "num_base_bdevs_discovered": 4, 00:19:39.092 "num_base_bdevs_operational": 4, 00:19:39.092 "process": { 00:19:39.092 "type": "rebuild", 00:19:39.092 "target": "spare", 00:19:39.092 "progress": { 00:19:39.092 "blocks": 44160, 00:19:39.092 "percent": 23 00:19:39.092 } 00:19:39.092 }, 00:19:39.092 "base_bdevs_list": [ 00:19:39.092 { 00:19:39.092 "name": "spare", 00:19:39.092 "uuid": "555f9286-fb95-5535-8788-23b4b0cd3f92", 00:19:39.092 "is_configured": true, 00:19:39.092 "data_offset": 2048, 00:19:39.092 "data_size": 63488 00:19:39.092 }, 00:19:39.092 { 00:19:39.092 "name": "BaseBdev2", 00:19:39.092 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:39.092 "is_configured": true, 00:19:39.092 "data_offset": 2048, 00:19:39.092 "data_size": 63488 00:19:39.092 }, 00:19:39.092 { 00:19:39.092 "name": "BaseBdev3", 00:19:39.092 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:39.092 "is_configured": true, 00:19:39.092 "data_offset": 2048, 00:19:39.092 "data_size": 63488 00:19:39.092 }, 00:19:39.092 { 00:19:39.092 "name": "BaseBdev4", 00:19:39.092 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:39.092 "is_configured": true, 00:19:39.092 "data_offset": 2048, 00:19:39.092 "data_size": 63488 00:19:39.092 } 00:19:39.092 ] 00:19:39.092 }' 00:19:39.092 08:53:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:39.092 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:39.092 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:39.092 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:39.092 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:40.470 08:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:40.470 08:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:40.470 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:40.470 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:40.470 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:40.470 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:40.470 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.470 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.470 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.470 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.470 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.470 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:40.470 "name": "raid_bdev1", 00:19:40.470 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:40.470 
"strip_size_kb": 64, 00:19:40.470 "state": "online", 00:19:40.470 "raid_level": "raid5f", 00:19:40.470 "superblock": true, 00:19:40.470 "num_base_bdevs": 4, 00:19:40.470 "num_base_bdevs_discovered": 4, 00:19:40.470 "num_base_bdevs_operational": 4, 00:19:40.470 "process": { 00:19:40.470 "type": "rebuild", 00:19:40.470 "target": "spare", 00:19:40.470 "progress": { 00:19:40.470 "blocks": 65280, 00:19:40.470 "percent": 34 00:19:40.470 } 00:19:40.470 }, 00:19:40.470 "base_bdevs_list": [ 00:19:40.470 { 00:19:40.470 "name": "spare", 00:19:40.470 "uuid": "555f9286-fb95-5535-8788-23b4b0cd3f92", 00:19:40.470 "is_configured": true, 00:19:40.470 "data_offset": 2048, 00:19:40.470 "data_size": 63488 00:19:40.470 }, 00:19:40.470 { 00:19:40.470 "name": "BaseBdev2", 00:19:40.470 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:40.470 "is_configured": true, 00:19:40.470 "data_offset": 2048, 00:19:40.470 "data_size": 63488 00:19:40.470 }, 00:19:40.470 { 00:19:40.470 "name": "BaseBdev3", 00:19:40.470 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:40.470 "is_configured": true, 00:19:40.470 "data_offset": 2048, 00:19:40.470 "data_size": 63488 00:19:40.470 }, 00:19:40.470 { 00:19:40.470 "name": "BaseBdev4", 00:19:40.470 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:40.470 "is_configured": true, 00:19:40.470 "data_offset": 2048, 00:19:40.470 "data_size": 63488 00:19:40.470 } 00:19:40.470 ] 00:19:40.470 }' 00:19:40.470 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:40.470 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:40.470 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:40.470 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:40.470 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:41.407 
08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:41.407 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:41.407 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:41.407 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:41.407 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:41.407 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:41.407 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.407 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.407 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.407 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.407 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.407 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:41.407 "name": "raid_bdev1", 00:19:41.407 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:41.407 "strip_size_kb": 64, 00:19:41.407 "state": "online", 00:19:41.407 "raid_level": "raid5f", 00:19:41.407 "superblock": true, 00:19:41.407 "num_base_bdevs": 4, 00:19:41.407 "num_base_bdevs_discovered": 4, 00:19:41.407 "num_base_bdevs_operational": 4, 00:19:41.407 "process": { 00:19:41.407 "type": "rebuild", 00:19:41.407 "target": "spare", 00:19:41.407 "progress": { 00:19:41.407 "blocks": 88320, 00:19:41.407 "percent": 46 00:19:41.407 } 00:19:41.407 }, 00:19:41.407 "base_bdevs_list": [ 00:19:41.407 { 00:19:41.407 "name": "spare", 00:19:41.407 "uuid": 
"555f9286-fb95-5535-8788-23b4b0cd3f92", 00:19:41.407 "is_configured": true, 00:19:41.407 "data_offset": 2048, 00:19:41.407 "data_size": 63488 00:19:41.407 }, 00:19:41.407 { 00:19:41.407 "name": "BaseBdev2", 00:19:41.407 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:41.407 "is_configured": true, 00:19:41.407 "data_offset": 2048, 00:19:41.407 "data_size": 63488 00:19:41.407 }, 00:19:41.407 { 00:19:41.408 "name": "BaseBdev3", 00:19:41.408 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:41.408 "is_configured": true, 00:19:41.408 "data_offset": 2048, 00:19:41.408 "data_size": 63488 00:19:41.408 }, 00:19:41.408 { 00:19:41.408 "name": "BaseBdev4", 00:19:41.408 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:41.408 "is_configured": true, 00:19:41.408 "data_offset": 2048, 00:19:41.408 "data_size": 63488 00:19:41.408 } 00:19:41.408 ] 00:19:41.408 }' 00:19:41.408 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:41.408 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:41.408 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:41.666 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:41.666 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:42.602 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:42.602 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:42.602 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:42.602 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:42.602 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:19:42.602 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:42.602 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.602 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.602 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.602 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.602 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.602 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:42.602 "name": "raid_bdev1", 00:19:42.602 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:42.602 "strip_size_kb": 64, 00:19:42.602 "state": "online", 00:19:42.602 "raid_level": "raid5f", 00:19:42.602 "superblock": true, 00:19:42.602 "num_base_bdevs": 4, 00:19:42.602 "num_base_bdevs_discovered": 4, 00:19:42.602 "num_base_bdevs_operational": 4, 00:19:42.602 "process": { 00:19:42.602 "type": "rebuild", 00:19:42.602 "target": "spare", 00:19:42.602 "progress": { 00:19:42.602 "blocks": 109440, 00:19:42.602 "percent": 57 00:19:42.602 } 00:19:42.602 }, 00:19:42.602 "base_bdevs_list": [ 00:19:42.602 { 00:19:42.602 "name": "spare", 00:19:42.602 "uuid": "555f9286-fb95-5535-8788-23b4b0cd3f92", 00:19:42.602 "is_configured": true, 00:19:42.602 "data_offset": 2048, 00:19:42.602 "data_size": 63488 00:19:42.602 }, 00:19:42.602 { 00:19:42.602 "name": "BaseBdev2", 00:19:42.602 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:42.602 "is_configured": true, 00:19:42.602 "data_offset": 2048, 00:19:42.602 "data_size": 63488 00:19:42.602 }, 00:19:42.602 { 00:19:42.602 "name": "BaseBdev3", 00:19:42.602 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:42.602 "is_configured": true, 00:19:42.602 
"data_offset": 2048, 00:19:42.602 "data_size": 63488 00:19:42.602 }, 00:19:42.602 { 00:19:42.602 "name": "BaseBdev4", 00:19:42.602 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:42.602 "is_configured": true, 00:19:42.602 "data_offset": 2048, 00:19:42.602 "data_size": 63488 00:19:42.602 } 00:19:42.602 ] 00:19:42.602 }' 00:19:42.602 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:42.602 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:42.602 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:42.602 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:42.602 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:43.979 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:43.979 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:43.979 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:43.979 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:43.979 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:43.979 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:43.979 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.979 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.979 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.979 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:43.979 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.979 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:43.979 "name": "raid_bdev1", 00:19:43.979 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:43.979 "strip_size_kb": 64, 00:19:43.979 "state": "online", 00:19:43.979 "raid_level": "raid5f", 00:19:43.979 "superblock": true, 00:19:43.979 "num_base_bdevs": 4, 00:19:43.979 "num_base_bdevs_discovered": 4, 00:19:43.979 "num_base_bdevs_operational": 4, 00:19:43.979 "process": { 00:19:43.979 "type": "rebuild", 00:19:43.979 "target": "spare", 00:19:43.979 "progress": { 00:19:43.979 "blocks": 132480, 00:19:43.979 "percent": 69 00:19:43.979 } 00:19:43.979 }, 00:19:43.979 "base_bdevs_list": [ 00:19:43.979 { 00:19:43.979 "name": "spare", 00:19:43.979 "uuid": "555f9286-fb95-5535-8788-23b4b0cd3f92", 00:19:43.979 "is_configured": true, 00:19:43.979 "data_offset": 2048, 00:19:43.979 "data_size": 63488 00:19:43.979 }, 00:19:43.979 { 00:19:43.979 "name": "BaseBdev2", 00:19:43.979 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:43.979 "is_configured": true, 00:19:43.979 "data_offset": 2048, 00:19:43.979 "data_size": 63488 00:19:43.979 }, 00:19:43.979 { 00:19:43.979 "name": "BaseBdev3", 00:19:43.979 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:43.979 "is_configured": true, 00:19:43.979 "data_offset": 2048, 00:19:43.979 "data_size": 63488 00:19:43.979 }, 00:19:43.979 { 00:19:43.979 "name": "BaseBdev4", 00:19:43.979 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:43.979 "is_configured": true, 00:19:43.979 "data_offset": 2048, 00:19:43.979 "data_size": 63488 00:19:43.980 } 00:19:43.980 ] 00:19:43.980 }' 00:19:43.980 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:43.980 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:19:43.980 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:43.980 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:43.980 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:44.940 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:44.940 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:44.940 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:44.940 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:44.940 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:44.940 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:44.940 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.940 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.940 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.940 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.940 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.940 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:44.940 "name": "raid_bdev1", 00:19:44.940 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:44.940 "strip_size_kb": 64, 00:19:44.940 "state": "online", 00:19:44.940 "raid_level": "raid5f", 00:19:44.940 "superblock": true, 00:19:44.940 "num_base_bdevs": 4, 00:19:44.940 "num_base_bdevs_discovered": 4, 
00:19:44.940 "num_base_bdevs_operational": 4, 00:19:44.940 "process": { 00:19:44.940 "type": "rebuild", 00:19:44.940 "target": "spare", 00:19:44.940 "progress": { 00:19:44.940 "blocks": 153600, 00:19:44.940 "percent": 80 00:19:44.940 } 00:19:44.940 }, 00:19:44.940 "base_bdevs_list": [ 00:19:44.940 { 00:19:44.941 "name": "spare", 00:19:44.941 "uuid": "555f9286-fb95-5535-8788-23b4b0cd3f92", 00:19:44.941 "is_configured": true, 00:19:44.941 "data_offset": 2048, 00:19:44.941 "data_size": 63488 00:19:44.941 }, 00:19:44.941 { 00:19:44.941 "name": "BaseBdev2", 00:19:44.941 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:44.941 "is_configured": true, 00:19:44.941 "data_offset": 2048, 00:19:44.941 "data_size": 63488 00:19:44.941 }, 00:19:44.941 { 00:19:44.941 "name": "BaseBdev3", 00:19:44.941 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:44.941 "is_configured": true, 00:19:44.941 "data_offset": 2048, 00:19:44.941 "data_size": 63488 00:19:44.941 }, 00:19:44.941 { 00:19:44.941 "name": "BaseBdev4", 00:19:44.941 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:44.941 "is_configured": true, 00:19:44.941 "data_offset": 2048, 00:19:44.941 "data_size": 63488 00:19:44.941 } 00:19:44.941 ] 00:19:44.941 }' 00:19:44.941 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:44.941 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:44.941 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:44.941 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:44.941 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:46.319 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:46.319 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:19:46.319 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:46.319 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:46.319 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:46.319 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:46.319 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.319 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.319 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.319 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.319 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.319 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:46.319 "name": "raid_bdev1", 00:19:46.319 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:46.319 "strip_size_kb": 64, 00:19:46.319 "state": "online", 00:19:46.319 "raid_level": "raid5f", 00:19:46.319 "superblock": true, 00:19:46.319 "num_base_bdevs": 4, 00:19:46.319 "num_base_bdevs_discovered": 4, 00:19:46.319 "num_base_bdevs_operational": 4, 00:19:46.319 "process": { 00:19:46.319 "type": "rebuild", 00:19:46.319 "target": "spare", 00:19:46.319 "progress": { 00:19:46.319 "blocks": 176640, 00:19:46.319 "percent": 92 00:19:46.319 } 00:19:46.319 }, 00:19:46.319 "base_bdevs_list": [ 00:19:46.319 { 00:19:46.319 "name": "spare", 00:19:46.319 "uuid": "555f9286-fb95-5535-8788-23b4b0cd3f92", 00:19:46.319 "is_configured": true, 00:19:46.319 "data_offset": 2048, 00:19:46.319 "data_size": 63488 00:19:46.319 }, 00:19:46.319 { 00:19:46.319 "name": "BaseBdev2", 
00:19:46.319 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:46.319 "is_configured": true, 00:19:46.319 "data_offset": 2048, 00:19:46.319 "data_size": 63488 00:19:46.319 }, 00:19:46.319 { 00:19:46.319 "name": "BaseBdev3", 00:19:46.319 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:46.319 "is_configured": true, 00:19:46.319 "data_offset": 2048, 00:19:46.319 "data_size": 63488 00:19:46.319 }, 00:19:46.319 { 00:19:46.319 "name": "BaseBdev4", 00:19:46.319 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:46.319 "is_configured": true, 00:19:46.319 "data_offset": 2048, 00:19:46.319 "data_size": 63488 00:19:46.319 } 00:19:46.319 ] 00:19:46.319 }' 00:19:46.320 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:46.320 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:46.320 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:46.320 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:46.320 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:46.888 [2024-11-20 08:53:17.625723] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:46.888 [2024-11-20 08:53:17.625814] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:46.888 [2024-11-20 08:53:17.626019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.159 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:47.159 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:47.159 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.159 08:53:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:47.159 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:47.159 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.159 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.159 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.159 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.159 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.159 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.159 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.159 "name": "raid_bdev1", 00:19:47.159 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:47.159 "strip_size_kb": 64, 00:19:47.159 "state": "online", 00:19:47.159 "raid_level": "raid5f", 00:19:47.159 "superblock": true, 00:19:47.159 "num_base_bdevs": 4, 00:19:47.159 "num_base_bdevs_discovered": 4, 00:19:47.159 "num_base_bdevs_operational": 4, 00:19:47.159 "base_bdevs_list": [ 00:19:47.159 { 00:19:47.159 "name": "spare", 00:19:47.159 "uuid": "555f9286-fb95-5535-8788-23b4b0cd3f92", 00:19:47.159 "is_configured": true, 00:19:47.159 "data_offset": 2048, 00:19:47.159 "data_size": 63488 00:19:47.159 }, 00:19:47.159 { 00:19:47.159 "name": "BaseBdev2", 00:19:47.159 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:47.159 "is_configured": true, 00:19:47.159 "data_offset": 2048, 00:19:47.159 "data_size": 63488 00:19:47.159 }, 00:19:47.159 { 00:19:47.159 "name": "BaseBdev3", 00:19:47.159 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:47.159 "is_configured": true, 00:19:47.159 "data_offset": 2048, 00:19:47.159 
"data_size": 63488 00:19:47.159 }, 00:19:47.159 { 00:19:47.159 "name": "BaseBdev4", 00:19:47.159 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:47.159 "is_configured": true, 00:19:47.159 "data_offset": 2048, 00:19:47.159 "data_size": 63488 00:19:47.159 } 00:19:47.159 ] 00:19:47.159 }' 00:19:47.159 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.449 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:47.449 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.449 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:47.449 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:47.449 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:47.449 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.450 08:53:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.450 "name": "raid_bdev1", 00:19:47.450 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:47.450 "strip_size_kb": 64, 00:19:47.450 "state": "online", 00:19:47.450 "raid_level": "raid5f", 00:19:47.450 "superblock": true, 00:19:47.450 "num_base_bdevs": 4, 00:19:47.450 "num_base_bdevs_discovered": 4, 00:19:47.450 "num_base_bdevs_operational": 4, 00:19:47.450 "base_bdevs_list": [ 00:19:47.450 { 00:19:47.450 "name": "spare", 00:19:47.450 "uuid": "555f9286-fb95-5535-8788-23b4b0cd3f92", 00:19:47.450 "is_configured": true, 00:19:47.450 "data_offset": 2048, 00:19:47.450 "data_size": 63488 00:19:47.450 }, 00:19:47.450 { 00:19:47.450 "name": "BaseBdev2", 00:19:47.450 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:47.450 "is_configured": true, 00:19:47.450 "data_offset": 2048, 00:19:47.450 "data_size": 63488 00:19:47.450 }, 00:19:47.450 { 00:19:47.450 "name": "BaseBdev3", 00:19:47.450 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:47.450 "is_configured": true, 00:19:47.450 "data_offset": 2048, 00:19:47.450 "data_size": 63488 00:19:47.450 }, 00:19:47.450 { 00:19:47.450 "name": "BaseBdev4", 00:19:47.450 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:47.450 "is_configured": true, 00:19:47.450 "data_offset": 2048, 00:19:47.450 "data_size": 63488 00:19:47.450 } 00:19:47.450 ] 00:19:47.450 }' 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.450 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.709 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.709 "name": "raid_bdev1", 00:19:47.709 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:47.709 "strip_size_kb": 64, 00:19:47.709 "state": "online", 00:19:47.709 "raid_level": "raid5f", 00:19:47.709 "superblock": true, 00:19:47.709 "num_base_bdevs": 4, 00:19:47.709 "num_base_bdevs_discovered": 4, 00:19:47.709 
"num_base_bdevs_operational": 4, 00:19:47.709 "base_bdevs_list": [ 00:19:47.709 { 00:19:47.709 "name": "spare", 00:19:47.709 "uuid": "555f9286-fb95-5535-8788-23b4b0cd3f92", 00:19:47.709 "is_configured": true, 00:19:47.709 "data_offset": 2048, 00:19:47.709 "data_size": 63488 00:19:47.709 }, 00:19:47.709 { 00:19:47.709 "name": "BaseBdev2", 00:19:47.709 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:47.709 "is_configured": true, 00:19:47.709 "data_offset": 2048, 00:19:47.709 "data_size": 63488 00:19:47.709 }, 00:19:47.709 { 00:19:47.709 "name": "BaseBdev3", 00:19:47.709 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:47.709 "is_configured": true, 00:19:47.709 "data_offset": 2048, 00:19:47.709 "data_size": 63488 00:19:47.709 }, 00:19:47.709 { 00:19:47.709 "name": "BaseBdev4", 00:19:47.709 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:47.709 "is_configured": true, 00:19:47.709 "data_offset": 2048, 00:19:47.709 "data_size": 63488 00:19:47.709 } 00:19:47.709 ] 00:19:47.709 }' 00:19:47.709 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.709 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.968 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:47.968 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.968 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.968 [2024-11-20 08:53:18.857657] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:47.968 [2024-11-20 08:53:18.857866] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:47.968 [2024-11-20 08:53:18.857992] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:47.968 [2024-11-20 08:53:18.858118] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:19:47.968 [2024-11-20 08:53:18.858148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:47.968 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.968 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.968 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:47.968 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.968 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.968 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.228 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:48.228 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:48.228 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:48.228 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:48.228 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:48.228 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:48.228 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:48.228 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:48.228 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:48.228 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:48.228 08:53:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:48.228 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:48.228 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:48.486 /dev/nbd0 00:19:48.486 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:48.486 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:48.486 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:48.486 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:48.486 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:48.486 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:48.486 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:48.486 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:48.486 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:48.486 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:48.486 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:48.486 1+0 records in 00:19:48.486 1+0 records out 00:19:48.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261547 s, 15.7 MB/s 00:19:48.486 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:48.486 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # size=4096 00:19:48.486 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:48.486 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:48.486 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:48.486 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:48.486 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:48.486 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:48.744 /dev/nbd1 00:19:48.744 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:48.744 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:48.744 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:48.744 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:48.744 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:48.744 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:48.744 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:48.744 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:48.744 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:48.744 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:48.744 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:48.744 1+0 records in 00:19:48.744 1+0 records out 00:19:48.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354483 s, 11.6 MB/s 00:19:48.745 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:48.745 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:48.745 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:48.745 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:48.745 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:48.745 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:48.745 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:48.745 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:49.016 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:49.016 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:49.016 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:49.016 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:49.016 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:49.016 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:49.016 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:19:49.278 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:49.278 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:49.278 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:49.278 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:49.278 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:49.278 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:49.278 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:49.278 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:49.278 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:49.278 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:49.537 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:49.537 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:49.537 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:49.537 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:49.537 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:49.537 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:49.537 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:49.537 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:49.537 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:19:49.537 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:49.537 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.537 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.537 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.537 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:49.537 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.537 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.537 [2024-11-20 08:53:20.249778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:49.537 [2024-11-20 08:53:20.249850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:49.537 [2024-11-20 08:53:20.249888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:49.537 [2024-11-20 08:53:20.249903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:49.537 [2024-11-20 08:53:20.252813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:49.537 [2024-11-20 08:53:20.252862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:49.537 [2024-11-20 08:53:20.253000] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:49.537 [2024-11-20 08:53:20.253063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:49.537 spare 00:19:49.537 [2024-11-20 08:53:20.253296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:49.537 [2024-11-20 08:53:20.253438] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:49.537 [2024-11-20 08:53:20.253558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:49.537 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.537 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:49.537 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.537 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.538 [2024-11-20 08:53:20.353695] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:49.538 [2024-11-20 08:53:20.354035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:49.538 [2024-11-20 08:53:20.354534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:19:49.538 [2024-11-20 08:53:20.361123] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:49.538 [2024-11-20 08:53:20.361350] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:49.538 [2024-11-20 08:53:20.361874] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.538 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.538 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:49.538 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:49.538 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:49.538 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 
00:19:49.538 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:49.538 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:49.538 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.538 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.538 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.538 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.538 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.538 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.538 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.538 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.538 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.538 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.538 "name": "raid_bdev1", 00:19:49.538 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:49.538 "strip_size_kb": 64, 00:19:49.538 "state": "online", 00:19:49.538 "raid_level": "raid5f", 00:19:49.538 "superblock": true, 00:19:49.538 "num_base_bdevs": 4, 00:19:49.538 "num_base_bdevs_discovered": 4, 00:19:49.538 "num_base_bdevs_operational": 4, 00:19:49.538 "base_bdevs_list": [ 00:19:49.538 { 00:19:49.538 "name": "spare", 00:19:49.538 "uuid": "555f9286-fb95-5535-8788-23b4b0cd3f92", 00:19:49.538 "is_configured": true, 00:19:49.538 "data_offset": 2048, 00:19:49.538 "data_size": 63488 00:19:49.538 }, 00:19:49.538 { 00:19:49.538 "name": "BaseBdev2", 00:19:49.538 "uuid": 
"02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:49.538 "is_configured": true, 00:19:49.538 "data_offset": 2048, 00:19:49.538 "data_size": 63488 00:19:49.538 }, 00:19:49.538 { 00:19:49.538 "name": "BaseBdev3", 00:19:49.538 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:49.538 "is_configured": true, 00:19:49.538 "data_offset": 2048, 00:19:49.538 "data_size": 63488 00:19:49.538 }, 00:19:49.538 { 00:19:49.538 "name": "BaseBdev4", 00:19:49.538 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:49.538 "is_configured": true, 00:19:49.538 "data_offset": 2048, 00:19:49.538 "data_size": 63488 00:19:49.538 } 00:19:49.538 ] 00:19:49.538 }' 00:19:49.538 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.538 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.106 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:50.106 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:50.106 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:50.106 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:50.106 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:50.106 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.106 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.106 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.106 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.106 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.106 08:53:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:50.106 "name": "raid_bdev1", 00:19:50.106 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:50.106 "strip_size_kb": 64, 00:19:50.106 "state": "online", 00:19:50.106 "raid_level": "raid5f", 00:19:50.106 "superblock": true, 00:19:50.106 "num_base_bdevs": 4, 00:19:50.106 "num_base_bdevs_discovered": 4, 00:19:50.106 "num_base_bdevs_operational": 4, 00:19:50.106 "base_bdevs_list": [ 00:19:50.106 { 00:19:50.106 "name": "spare", 00:19:50.106 "uuid": "555f9286-fb95-5535-8788-23b4b0cd3f92", 00:19:50.106 "is_configured": true, 00:19:50.106 "data_offset": 2048, 00:19:50.106 "data_size": 63488 00:19:50.106 }, 00:19:50.106 { 00:19:50.106 "name": "BaseBdev2", 00:19:50.106 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:50.106 "is_configured": true, 00:19:50.106 "data_offset": 2048, 00:19:50.106 "data_size": 63488 00:19:50.106 }, 00:19:50.106 { 00:19:50.106 "name": "BaseBdev3", 00:19:50.106 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:50.106 "is_configured": true, 00:19:50.106 "data_offset": 2048, 00:19:50.106 "data_size": 63488 00:19:50.106 }, 00:19:50.106 { 00:19:50.106 "name": "BaseBdev4", 00:19:50.106 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:50.106 "is_configured": true, 00:19:50.106 "data_offset": 2048, 00:19:50.106 "data_size": 63488 00:19:50.106 } 00:19:50.106 ] 00:19:50.106 }' 00:19:50.106 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:50.365 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:50.365 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.365 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:50.365 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.365 
08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:50.365 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.365 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.365 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.365 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:50.365 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:50.365 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.365 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.365 [2024-11-20 08:53:21.141663] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:50.365 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.365 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:50.365 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:50.365 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:50.365 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:50.365 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:50.365 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:50.365 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.365 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:19:50.365 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.365 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.366 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.366 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.366 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.366 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.366 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.366 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.366 "name": "raid_bdev1", 00:19:50.366 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:50.366 "strip_size_kb": 64, 00:19:50.366 "state": "online", 00:19:50.366 "raid_level": "raid5f", 00:19:50.366 "superblock": true, 00:19:50.366 "num_base_bdevs": 4, 00:19:50.366 "num_base_bdevs_discovered": 3, 00:19:50.366 "num_base_bdevs_operational": 3, 00:19:50.366 "base_bdevs_list": [ 00:19:50.366 { 00:19:50.366 "name": null, 00:19:50.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.366 "is_configured": false, 00:19:50.366 "data_offset": 0, 00:19:50.366 "data_size": 63488 00:19:50.366 }, 00:19:50.366 { 00:19:50.366 "name": "BaseBdev2", 00:19:50.366 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:50.366 "is_configured": true, 00:19:50.366 "data_offset": 2048, 00:19:50.366 "data_size": 63488 00:19:50.366 }, 00:19:50.366 { 00:19:50.366 "name": "BaseBdev3", 00:19:50.366 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:50.366 "is_configured": true, 00:19:50.366 "data_offset": 2048, 00:19:50.366 "data_size": 63488 00:19:50.366 }, 00:19:50.366 { 00:19:50.366 "name": "BaseBdev4", 
00:19:50.366 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:50.366 "is_configured": true, 00:19:50.366 "data_offset": 2048, 00:19:50.366 "data_size": 63488 00:19:50.366 } 00:19:50.366 ] 00:19:50.366 }' 00:19:50.366 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.366 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.941 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:50.941 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.941 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.941 [2024-11-20 08:53:21.657847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:50.941 [2024-11-20 08:53:21.658090] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:50.941 [2024-11-20 08:53:21.658119] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:50.941 [2024-11-20 08:53:21.658230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:50.941 [2024-11-20 08:53:21.672187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:19:50.941 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.941 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:50.941 [2024-11-20 08:53:21.681105] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:51.879 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:51.879 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.879 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:51.879 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:51.879 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.879 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.879 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.879 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.879 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.879 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.879 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.879 "name": "raid_bdev1", 00:19:51.879 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:51.879 "strip_size_kb": 64, 00:19:51.879 "state": "online", 00:19:51.879 
"raid_level": "raid5f", 00:19:51.879 "superblock": true, 00:19:51.879 "num_base_bdevs": 4, 00:19:51.879 "num_base_bdevs_discovered": 4, 00:19:51.879 "num_base_bdevs_operational": 4, 00:19:51.879 "process": { 00:19:51.879 "type": "rebuild", 00:19:51.879 "target": "spare", 00:19:51.879 "progress": { 00:19:51.879 "blocks": 17280, 00:19:51.879 "percent": 9 00:19:51.879 } 00:19:51.879 }, 00:19:51.879 "base_bdevs_list": [ 00:19:51.879 { 00:19:51.879 "name": "spare", 00:19:51.879 "uuid": "555f9286-fb95-5535-8788-23b4b0cd3f92", 00:19:51.879 "is_configured": true, 00:19:51.879 "data_offset": 2048, 00:19:51.879 "data_size": 63488 00:19:51.879 }, 00:19:51.879 { 00:19:51.879 "name": "BaseBdev2", 00:19:51.879 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:51.879 "is_configured": true, 00:19:51.879 "data_offset": 2048, 00:19:51.879 "data_size": 63488 00:19:51.879 }, 00:19:51.879 { 00:19:51.879 "name": "BaseBdev3", 00:19:51.879 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:51.879 "is_configured": true, 00:19:51.879 "data_offset": 2048, 00:19:51.879 "data_size": 63488 00:19:51.879 }, 00:19:51.879 { 00:19:51.879 "name": "BaseBdev4", 00:19:51.879 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:51.879 "is_configured": true, 00:19:51.879 "data_offset": 2048, 00:19:51.879 "data_size": 63488 00:19:51.879 } 00:19:51.879 ] 00:19:51.879 }' 00:19:51.879 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.879 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:51.879 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.152 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.152 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:52.152 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.152 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.152 [2024-11-20 08:53:22.842551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:52.152 [2024-11-20 08:53:22.892969] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:52.152 [2024-11-20 08:53:22.893260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.152 [2024-11-20 08:53:22.893300] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:52.152 [2024-11-20 08:53:22.893319] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:52.153 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.153 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:52.153 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:52.153 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:52.153 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:52.153 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:52.153 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:52.153 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.153 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.153 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.153 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:19:52.153 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.153 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.153 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.153 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.153 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.153 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.153 "name": "raid_bdev1", 00:19:52.153 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:52.153 "strip_size_kb": 64, 00:19:52.153 "state": "online", 00:19:52.153 "raid_level": "raid5f", 00:19:52.153 "superblock": true, 00:19:52.153 "num_base_bdevs": 4, 00:19:52.153 "num_base_bdevs_discovered": 3, 00:19:52.153 "num_base_bdevs_operational": 3, 00:19:52.153 "base_bdevs_list": [ 00:19:52.153 { 00:19:52.153 "name": null, 00:19:52.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.153 "is_configured": false, 00:19:52.153 "data_offset": 0, 00:19:52.153 "data_size": 63488 00:19:52.153 }, 00:19:52.153 { 00:19:52.153 "name": "BaseBdev2", 00:19:52.153 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:52.153 "is_configured": true, 00:19:52.153 "data_offset": 2048, 00:19:52.153 "data_size": 63488 00:19:52.153 }, 00:19:52.153 { 00:19:52.153 "name": "BaseBdev3", 00:19:52.153 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:52.153 "is_configured": true, 00:19:52.153 "data_offset": 2048, 00:19:52.153 "data_size": 63488 00:19:52.153 }, 00:19:52.153 { 00:19:52.153 "name": "BaseBdev4", 00:19:52.153 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:52.153 "is_configured": true, 00:19:52.153 "data_offset": 2048, 00:19:52.153 "data_size": 63488 00:19:52.153 } 00:19:52.153 ] 00:19:52.153 }' 
00:19:52.153 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.153 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.719 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:52.719 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.719 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.719 [2024-11-20 08:53:23.457034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:52.719 [2024-11-20 08:53:23.457277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.719 [2024-11-20 08:53:23.457325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:19:52.719 [2024-11-20 08:53:23.457346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.719 [2024-11-20 08:53:23.457959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.719 [2024-11-20 08:53:23.458010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:52.719 [2024-11-20 08:53:23.458161] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:52.719 [2024-11-20 08:53:23.458188] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:52.719 [2024-11-20 08:53:23.458202] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:52.719 [2024-11-20 08:53:23.458252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:52.719 spare 00:19:52.719 [2024-11-20 08:53:23.471689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:19:52.719 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.719 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:52.719 [2024-11-20 08:53:23.480424] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:53.655 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.655 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.655 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:53.655 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:53.655 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.655 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.655 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.655 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.655 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.655 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.655 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.655 "name": "raid_bdev1", 00:19:53.655 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:53.655 "strip_size_kb": 64, 00:19:53.655 "state": 
"online", 00:19:53.655 "raid_level": "raid5f", 00:19:53.655 "superblock": true, 00:19:53.655 "num_base_bdevs": 4, 00:19:53.655 "num_base_bdevs_discovered": 4, 00:19:53.655 "num_base_bdevs_operational": 4, 00:19:53.655 "process": { 00:19:53.655 "type": "rebuild", 00:19:53.655 "target": "spare", 00:19:53.655 "progress": { 00:19:53.655 "blocks": 17280, 00:19:53.655 "percent": 9 00:19:53.655 } 00:19:53.655 }, 00:19:53.655 "base_bdevs_list": [ 00:19:53.655 { 00:19:53.655 "name": "spare", 00:19:53.655 "uuid": "555f9286-fb95-5535-8788-23b4b0cd3f92", 00:19:53.655 "is_configured": true, 00:19:53.655 "data_offset": 2048, 00:19:53.655 "data_size": 63488 00:19:53.655 }, 00:19:53.655 { 00:19:53.655 "name": "BaseBdev2", 00:19:53.655 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:53.655 "is_configured": true, 00:19:53.655 "data_offset": 2048, 00:19:53.655 "data_size": 63488 00:19:53.655 }, 00:19:53.655 { 00:19:53.655 "name": "BaseBdev3", 00:19:53.655 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:53.655 "is_configured": true, 00:19:53.655 "data_offset": 2048, 00:19:53.655 "data_size": 63488 00:19:53.655 }, 00:19:53.655 { 00:19:53.655 "name": "BaseBdev4", 00:19:53.655 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:53.655 "is_configured": true, 00:19:53.655 "data_offset": 2048, 00:19:53.655 "data_size": 63488 00:19:53.655 } 00:19:53.655 ] 00:19:53.655 }' 00:19:53.655 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.914 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:53.914 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.914 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:53.914 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:53.914 08:53:24 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.914 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.914 [2024-11-20 08:53:24.641553] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:53.914 [2024-11-20 08:53:24.691298] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:53.914 [2024-11-20 08:53:24.691393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.914 [2024-11-20 08:53:24.691425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:53.914 [2024-11-20 08:53:24.691436] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:53.914 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.914 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:53.914 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:53.914 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.914 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:53.914 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:53.914 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:53.914 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.914 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.914 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.914 08:53:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.914 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.914 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.914 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.914 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.914 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.914 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.914 "name": "raid_bdev1", 00:19:53.914 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:53.914 "strip_size_kb": 64, 00:19:53.914 "state": "online", 00:19:53.914 "raid_level": "raid5f", 00:19:53.914 "superblock": true, 00:19:53.914 "num_base_bdevs": 4, 00:19:53.914 "num_base_bdevs_discovered": 3, 00:19:53.914 "num_base_bdevs_operational": 3, 00:19:53.914 "base_bdevs_list": [ 00:19:53.914 { 00:19:53.914 "name": null, 00:19:53.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.914 "is_configured": false, 00:19:53.914 "data_offset": 0, 00:19:53.914 "data_size": 63488 00:19:53.914 }, 00:19:53.914 { 00:19:53.914 "name": "BaseBdev2", 00:19:53.914 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:53.914 "is_configured": true, 00:19:53.914 "data_offset": 2048, 00:19:53.914 "data_size": 63488 00:19:53.914 }, 00:19:53.914 { 00:19:53.914 "name": "BaseBdev3", 00:19:53.914 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:53.914 "is_configured": true, 00:19:53.914 "data_offset": 2048, 00:19:53.914 "data_size": 63488 00:19:53.914 }, 00:19:53.914 { 00:19:53.914 "name": "BaseBdev4", 00:19:53.914 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:53.914 "is_configured": true, 00:19:53.914 "data_offset": 2048, 00:19:53.914 
"data_size": 63488 00:19:53.914 } 00:19:53.914 ] 00:19:53.914 }' 00:19:53.914 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.914 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.481 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:54.481 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.481 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:54.481 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:54.481 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.481 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.481 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.481 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.481 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.481 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.481 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.481 "name": "raid_bdev1", 00:19:54.481 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:54.481 "strip_size_kb": 64, 00:19:54.481 "state": "online", 00:19:54.481 "raid_level": "raid5f", 00:19:54.481 "superblock": true, 00:19:54.481 "num_base_bdevs": 4, 00:19:54.481 "num_base_bdevs_discovered": 3, 00:19:54.481 "num_base_bdevs_operational": 3, 00:19:54.481 "base_bdevs_list": [ 00:19:54.481 { 00:19:54.481 "name": null, 00:19:54.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.481 
"is_configured": false, 00:19:54.481 "data_offset": 0, 00:19:54.481 "data_size": 63488 00:19:54.481 }, 00:19:54.481 { 00:19:54.481 "name": "BaseBdev2", 00:19:54.481 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:54.481 "is_configured": true, 00:19:54.481 "data_offset": 2048, 00:19:54.481 "data_size": 63488 00:19:54.481 }, 00:19:54.481 { 00:19:54.481 "name": "BaseBdev3", 00:19:54.481 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:54.481 "is_configured": true, 00:19:54.481 "data_offset": 2048, 00:19:54.481 "data_size": 63488 00:19:54.481 }, 00:19:54.481 { 00:19:54.481 "name": "BaseBdev4", 00:19:54.481 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:54.481 "is_configured": true, 00:19:54.481 "data_offset": 2048, 00:19:54.481 "data_size": 63488 00:19:54.481 } 00:19:54.481 ] 00:19:54.481 }' 00:19:54.481 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:54.481 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:54.481 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:54.739 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:54.739 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:54.739 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.739 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.739 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.739 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:54.739 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.739 08:53:25 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.739 [2024-11-20 08:53:25.446296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:54.739 [2024-11-20 08:53:25.446352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.739 [2024-11-20 08:53:25.446384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:19:54.739 [2024-11-20 08:53:25.446398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.739 [2024-11-20 08:53:25.447010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.739 [2024-11-20 08:53:25.447044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:54.739 [2024-11-20 08:53:25.447191] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:54.739 [2024-11-20 08:53:25.447214] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:54.739 [2024-11-20 08:53:25.447232] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:54.739 [2024-11-20 08:53:25.447246] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:54.739 BaseBdev1 00:19:54.739 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.739 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:55.675 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:55.675 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:55.675 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:55.675 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:55.675 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:55.675 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:55.675 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:55.675 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:55.675 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:55.676 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:55.676 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.676 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.676 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.676 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.676 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.676 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.676 "name": "raid_bdev1", 00:19:55.676 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:55.676 "strip_size_kb": 64, 00:19:55.676 "state": "online", 00:19:55.676 "raid_level": "raid5f", 00:19:55.676 "superblock": true, 00:19:55.676 "num_base_bdevs": 4, 00:19:55.676 "num_base_bdevs_discovered": 3, 00:19:55.676 "num_base_bdevs_operational": 3, 00:19:55.676 "base_bdevs_list": [ 00:19:55.676 { 00:19:55.676 "name": null, 00:19:55.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.676 "is_configured": false, 00:19:55.676 
"data_offset": 0, 00:19:55.676 "data_size": 63488 00:19:55.676 }, 00:19:55.676 { 00:19:55.676 "name": "BaseBdev2", 00:19:55.676 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:55.676 "is_configured": true, 00:19:55.676 "data_offset": 2048, 00:19:55.676 "data_size": 63488 00:19:55.676 }, 00:19:55.676 { 00:19:55.676 "name": "BaseBdev3", 00:19:55.676 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:55.676 "is_configured": true, 00:19:55.676 "data_offset": 2048, 00:19:55.676 "data_size": 63488 00:19:55.676 }, 00:19:55.676 { 00:19:55.676 "name": "BaseBdev4", 00:19:55.676 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:55.676 "is_configured": true, 00:19:55.676 "data_offset": 2048, 00:19:55.676 "data_size": 63488 00:19:55.676 } 00:19:55.676 ] 00:19:55.676 }' 00:19:55.676 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.676 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.243 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:56.243 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:56.243 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:56.243 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:56.243 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:56.243 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.243 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.243 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.243 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:56.243 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.243 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:56.243 "name": "raid_bdev1", 00:19:56.243 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:56.243 "strip_size_kb": 64, 00:19:56.243 "state": "online", 00:19:56.243 "raid_level": "raid5f", 00:19:56.243 "superblock": true, 00:19:56.243 "num_base_bdevs": 4, 00:19:56.243 "num_base_bdevs_discovered": 3, 00:19:56.243 "num_base_bdevs_operational": 3, 00:19:56.243 "base_bdevs_list": [ 00:19:56.243 { 00:19:56.243 "name": null, 00:19:56.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.243 "is_configured": false, 00:19:56.243 "data_offset": 0, 00:19:56.243 "data_size": 63488 00:19:56.243 }, 00:19:56.243 { 00:19:56.243 "name": "BaseBdev2", 00:19:56.243 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:56.243 "is_configured": true, 00:19:56.243 "data_offset": 2048, 00:19:56.243 "data_size": 63488 00:19:56.243 }, 00:19:56.243 { 00:19:56.243 "name": "BaseBdev3", 00:19:56.243 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:56.243 "is_configured": true, 00:19:56.243 "data_offset": 2048, 00:19:56.243 "data_size": 63488 00:19:56.243 }, 00:19:56.243 { 00:19:56.243 "name": "BaseBdev4", 00:19:56.243 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:56.243 "is_configured": true, 00:19:56.243 "data_offset": 2048, 00:19:56.243 "data_size": 63488 00:19:56.243 } 00:19:56.243 ] 00:19:56.243 }' 00:19:56.243 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.243 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:56.243 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.243 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:56.243 
08:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:56.243 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:19:56.243 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:56.243 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:56.243 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:56.243 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:56.243 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:56.243 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:56.243 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.243 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.243 [2024-11-20 08:53:27.119036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:56.243 [2024-11-20 08:53:27.119667] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:56.243 [2024-11-20 08:53:27.119698] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:56.243 request: 00:19:56.243 { 00:19:56.243 "base_bdev": "BaseBdev1", 00:19:56.243 "raid_bdev": "raid_bdev1", 00:19:56.243 "method": "bdev_raid_add_base_bdev", 00:19:56.243 "req_id": 1 00:19:56.243 } 00:19:56.243 Got JSON-RPC error response 00:19:56.243 response: 00:19:56.243 { 00:19:56.243 "code": -22, 00:19:56.243 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:19:56.243 } 00:19:56.243 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:56.243 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:19:56.243 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:56.243 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:56.243 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:56.243 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:57.616 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:57.616 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.616 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.616 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:57.617 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:57.617 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:57.617 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.617 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.617 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.617 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.617 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.617 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.617 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.617 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.617 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.617 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.617 "name": "raid_bdev1", 00:19:57.617 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:57.617 "strip_size_kb": 64, 00:19:57.617 "state": "online", 00:19:57.617 "raid_level": "raid5f", 00:19:57.617 "superblock": true, 00:19:57.617 "num_base_bdevs": 4, 00:19:57.617 "num_base_bdevs_discovered": 3, 00:19:57.617 "num_base_bdevs_operational": 3, 00:19:57.617 "base_bdevs_list": [ 00:19:57.617 { 00:19:57.617 "name": null, 00:19:57.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.617 "is_configured": false, 00:19:57.617 "data_offset": 0, 00:19:57.617 "data_size": 63488 00:19:57.617 }, 00:19:57.617 { 00:19:57.617 "name": "BaseBdev2", 00:19:57.617 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:57.617 "is_configured": true, 00:19:57.617 "data_offset": 2048, 00:19:57.617 "data_size": 63488 00:19:57.617 }, 00:19:57.617 { 00:19:57.617 "name": "BaseBdev3", 00:19:57.617 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:57.617 "is_configured": true, 00:19:57.617 "data_offset": 2048, 00:19:57.617 "data_size": 63488 00:19:57.617 }, 00:19:57.617 { 00:19:57.617 "name": "BaseBdev4", 00:19:57.617 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:57.617 "is_configured": true, 00:19:57.617 "data_offset": 2048, 00:19:57.617 "data_size": 63488 00:19:57.617 } 00:19:57.617 ] 00:19:57.617 }' 00:19:57.617 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.617 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:19:57.875 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:57.875 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:57.875 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:57.875 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:57.875 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:57.875 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.875 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.875 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.875 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.875 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.875 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:57.875 "name": "raid_bdev1", 00:19:57.875 "uuid": "c01a8e79-ebd2-4680-b31d-cfce880438b7", 00:19:57.875 "strip_size_kb": 64, 00:19:57.875 "state": "online", 00:19:57.875 "raid_level": "raid5f", 00:19:57.875 "superblock": true, 00:19:57.875 "num_base_bdevs": 4, 00:19:57.875 "num_base_bdevs_discovered": 3, 00:19:57.875 "num_base_bdevs_operational": 3, 00:19:57.875 "base_bdevs_list": [ 00:19:57.875 { 00:19:57.875 "name": null, 00:19:57.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.875 "is_configured": false, 00:19:57.875 "data_offset": 0, 00:19:57.875 "data_size": 63488 00:19:57.875 }, 00:19:57.875 { 00:19:57.875 "name": "BaseBdev2", 00:19:57.875 "uuid": "02b6fd40-bea9-53f0-b035-8896f5dde48a", 00:19:57.875 "is_configured": true, 
00:19:57.875 "data_offset": 2048, 00:19:57.875 "data_size": 63488 00:19:57.875 }, 00:19:57.875 { 00:19:57.875 "name": "BaseBdev3", 00:19:57.875 "uuid": "8da2eaa4-acec-5daf-a7e2-ad204c1ff51d", 00:19:57.875 "is_configured": true, 00:19:57.875 "data_offset": 2048, 00:19:57.875 "data_size": 63488 00:19:57.875 }, 00:19:57.875 { 00:19:57.875 "name": "BaseBdev4", 00:19:57.875 "uuid": "d90a60d2-9c5b-52fd-9130-9313e9a1d4c6", 00:19:57.875 "is_configured": true, 00:19:57.875 "data_offset": 2048, 00:19:57.875 "data_size": 63488 00:19:57.875 } 00:19:57.875 ] 00:19:57.875 }' 00:19:57.875 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:57.875 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:57.875 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.134 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:58.134 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85514 00:19:58.134 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85514 ']' 00:19:58.134 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85514 00:19:58.134 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:58.134 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:58.134 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85514 00:19:58.134 killing process with pid 85514 00:19:58.134 Received shutdown signal, test time was about 60.000000 seconds 00:19:58.134 00:19:58.134 Latency(us) 00:19:58.134 [2024-11-20T08:53:29.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.134 [2024-11-20T08:53:29.050Z] 
=================================================================================================================== 00:19:58.134 [2024-11-20T08:53:29.050Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:58.134 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:58.134 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:58.134 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85514' 00:19:58.134 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85514 00:19:58.134 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85514 00:19:58.134 [2024-11-20 08:53:28.863082] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:58.134 [2024-11-20 08:53:28.863301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:58.134 [2024-11-20 08:53:28.863414] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:58.134 [2024-11-20 08:53:28.863437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:58.393 [2024-11-20 08:53:29.293093] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:59.769 08:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:19:59.769 00:19:59.769 real 0m28.698s 00:19:59.769 user 0m37.401s 00:19:59.769 sys 0m2.979s 00:19:59.769 08:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:59.769 ************************************ 00:19:59.769 END TEST raid5f_rebuild_test_sb 00:19:59.769 ************************************ 00:19:59.769 08:53:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.769 08:53:30 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:19:59.769 08:53:30 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:19:59.769 08:53:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:59.769 08:53:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:59.769 08:53:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:59.769 ************************************ 00:19:59.769 START TEST raid_state_function_test_sb_4k 00:19:59.769 ************************************ 00:19:59.769 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:59.769 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:59.769 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:59.769 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:59.769 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:59.769 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:59.769 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:59.769 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:59.769 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:59.769 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:59.769 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:59.769 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:59.769 08:53:30 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:59.769 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:59.769 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:59.769 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:59.769 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:59.769 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:59.769 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:59.769 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:59.769 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:59.769 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:59.769 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:59.769 Process raid pid: 86337 00:19:59.769 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86337 00:19:59.770 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86337' 00:19:59.770 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86337 00:19:59.770 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:59.770 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86337 ']' 00:19:59.770 08:53:30 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.770 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.770 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.770 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.770 08:53:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:59.770 [2024-11-20 08:53:30.487030] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:19:59.770 [2024-11-20 08:53:30.487481] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.770 [2024-11-20 08:53:30.670867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.029 [2024-11-20 08:53:30.792055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.288 [2024-11-20 08:53:30.999734] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:00.288 [2024-11-20 08:53:30.999772] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:00.547 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.547 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:20:00.547 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:20:00.547 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.547 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:00.547 [2024-11-20 08:53:31.422372] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:00.547 [2024-11-20 08:53:31.422579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:00.547 [2024-11-20 08:53:31.422712] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:00.547 [2024-11-20 08:53:31.422860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:00.547 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.547 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:00.547 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:00.547 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:00.547 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:00.547 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:00.547 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:00.547 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.547 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.547 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.547 
08:53:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.547 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.547 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.547 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:00.547 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:00.547 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.805 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.805 "name": "Existed_Raid", 00:20:00.805 "uuid": "a2b51e98-7347-40a9-a4cd-11f74c805382", 00:20:00.805 "strip_size_kb": 0, 00:20:00.805 "state": "configuring", 00:20:00.805 "raid_level": "raid1", 00:20:00.805 "superblock": true, 00:20:00.805 "num_base_bdevs": 2, 00:20:00.805 "num_base_bdevs_discovered": 0, 00:20:00.805 "num_base_bdevs_operational": 2, 00:20:00.805 "base_bdevs_list": [ 00:20:00.805 { 00:20:00.805 "name": "BaseBdev1", 00:20:00.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.805 "is_configured": false, 00:20:00.805 "data_offset": 0, 00:20:00.805 "data_size": 0 00:20:00.805 }, 00:20:00.805 { 00:20:00.805 "name": "BaseBdev2", 00:20:00.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.805 "is_configured": false, 00:20:00.805 "data_offset": 0, 00:20:00.805 "data_size": 0 00:20:00.805 } 00:20:00.805 ] 00:20:00.805 }' 00:20:00.805 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.805 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:01.062 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:20:01.062 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.062 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:01.062 [2024-11-20 08:53:31.954484] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:01.062 [2024-11-20 08:53:31.954734] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:01.062 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.062 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:01.062 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.062 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:01.062 [2024-11-20 08:53:31.962480] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:01.062 [2024-11-20 08:53:31.962659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:01.062 [2024-11-20 08:53:31.962826] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:01.062 [2024-11-20 08:53:31.962895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:01.062 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.062 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:20:01.062 08:53:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.062 08:53:31 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:01.320 [2024-11-20 08:53:32.006290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:01.320 BaseBdev1 00:20:01.320 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.320 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:01.320 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:01.320 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:01.320 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:20:01.320 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:01.320 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:01.320 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:01.321 [ 00:20:01.321 { 00:20:01.321 "name": "BaseBdev1", 00:20:01.321 "aliases": [ 00:20:01.321 
"4bcf8cec-2d0e-44f8-b268-caf46ea916df" 00:20:01.321 ], 00:20:01.321 "product_name": "Malloc disk", 00:20:01.321 "block_size": 4096, 00:20:01.321 "num_blocks": 8192, 00:20:01.321 "uuid": "4bcf8cec-2d0e-44f8-b268-caf46ea916df", 00:20:01.321 "assigned_rate_limits": { 00:20:01.321 "rw_ios_per_sec": 0, 00:20:01.321 "rw_mbytes_per_sec": 0, 00:20:01.321 "r_mbytes_per_sec": 0, 00:20:01.321 "w_mbytes_per_sec": 0 00:20:01.321 }, 00:20:01.321 "claimed": true, 00:20:01.321 "claim_type": "exclusive_write", 00:20:01.321 "zoned": false, 00:20:01.321 "supported_io_types": { 00:20:01.321 "read": true, 00:20:01.321 "write": true, 00:20:01.321 "unmap": true, 00:20:01.321 "flush": true, 00:20:01.321 "reset": true, 00:20:01.321 "nvme_admin": false, 00:20:01.321 "nvme_io": false, 00:20:01.321 "nvme_io_md": false, 00:20:01.321 "write_zeroes": true, 00:20:01.321 "zcopy": true, 00:20:01.321 "get_zone_info": false, 00:20:01.321 "zone_management": false, 00:20:01.321 "zone_append": false, 00:20:01.321 "compare": false, 00:20:01.321 "compare_and_write": false, 00:20:01.321 "abort": true, 00:20:01.321 "seek_hole": false, 00:20:01.321 "seek_data": false, 00:20:01.321 "copy": true, 00:20:01.321 "nvme_iov_md": false 00:20:01.321 }, 00:20:01.321 "memory_domains": [ 00:20:01.321 { 00:20:01.321 "dma_device_id": "system", 00:20:01.321 "dma_device_type": 1 00:20:01.321 }, 00:20:01.321 { 00:20:01.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.321 "dma_device_type": 2 00:20:01.321 } 00:20:01.321 ], 00:20:01.321 "driver_specific": {} 00:20:01.321 } 00:20:01.321 ] 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.321 "name": "Existed_Raid", 00:20:01.321 "uuid": "5874dd29-ea64-4d11-901e-2fc989ba8b41", 00:20:01.321 "strip_size_kb": 0, 00:20:01.321 "state": "configuring", 00:20:01.321 "raid_level": "raid1", 00:20:01.321 "superblock": true, 00:20:01.321 "num_base_bdevs": 2, 00:20:01.321 
"num_base_bdevs_discovered": 1, 00:20:01.321 "num_base_bdevs_operational": 2, 00:20:01.321 "base_bdevs_list": [ 00:20:01.321 { 00:20:01.321 "name": "BaseBdev1", 00:20:01.321 "uuid": "4bcf8cec-2d0e-44f8-b268-caf46ea916df", 00:20:01.321 "is_configured": true, 00:20:01.321 "data_offset": 256, 00:20:01.321 "data_size": 7936 00:20:01.321 }, 00:20:01.321 { 00:20:01.321 "name": "BaseBdev2", 00:20:01.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.321 "is_configured": false, 00:20:01.321 "data_offset": 0, 00:20:01.321 "data_size": 0 00:20:01.321 } 00:20:01.321 ] 00:20:01.321 }' 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.321 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:01.888 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:01.888 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.888 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:01.888 [2024-11-20 08:53:32.578504] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:01.888 [2024-11-20 08:53:32.578756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:01.888 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.888 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:01.888 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.888 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:01.888 [2024-11-20 08:53:32.586573] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:01.888 [2024-11-20 08:53:32.589212] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:01.888 [2024-11-20 08:53:32.589408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:01.888 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.888 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:01.888 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:01.888 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:01.888 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:01.888 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:01.888 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:01.888 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:01.888 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:01.888 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.888 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.888 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.889 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.889 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:20:01.889 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.889 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.889 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:01.889 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.889 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.889 "name": "Existed_Raid", 00:20:01.889 "uuid": "ada0ff95-55c4-494f-8425-fe38d12678bc", 00:20:01.889 "strip_size_kb": 0, 00:20:01.889 "state": "configuring", 00:20:01.889 "raid_level": "raid1", 00:20:01.889 "superblock": true, 00:20:01.889 "num_base_bdevs": 2, 00:20:01.889 "num_base_bdevs_discovered": 1, 00:20:01.889 "num_base_bdevs_operational": 2, 00:20:01.889 "base_bdevs_list": [ 00:20:01.889 { 00:20:01.889 "name": "BaseBdev1", 00:20:01.889 "uuid": "4bcf8cec-2d0e-44f8-b268-caf46ea916df", 00:20:01.889 "is_configured": true, 00:20:01.889 "data_offset": 256, 00:20:01.889 "data_size": 7936 00:20:01.889 }, 00:20:01.889 { 00:20:01.889 "name": "BaseBdev2", 00:20:01.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.889 "is_configured": false, 00:20:01.889 "data_offset": 0, 00:20:01.889 "data_size": 0 00:20:01.889 } 00:20:01.889 ] 00:20:01.889 }' 00:20:01.889 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.889 08:53:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.456 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:20:02.456 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.456 08:53:33 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.456 [2024-11-20 08:53:33.137684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:02.456 BaseBdev2 00:20:02.456 [2024-11-20 08:53:33.138221] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:02.456 [2024-11-20 08:53:33.138248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:02.456 [2024-11-20 08:53:33.138602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:02.456 [2024-11-20 08:53:33.138806] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:02.456 [2024-11-20 08:53:33.138828] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:02.456 [2024-11-20 08:53:33.139048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.456 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.456 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:02.456 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:02.456 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:02.456 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:20:02.456 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:02.456 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:02.456 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:02.456 08:53:33 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.456 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.456 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.456 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:02.456 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.456 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.456 [ 00:20:02.456 { 00:20:02.456 "name": "BaseBdev2", 00:20:02.456 "aliases": [ 00:20:02.456 "dc686571-3874-49d5-9ecc-b706c4cd46ac" 00:20:02.456 ], 00:20:02.456 "product_name": "Malloc disk", 00:20:02.456 "block_size": 4096, 00:20:02.456 "num_blocks": 8192, 00:20:02.456 "uuid": "dc686571-3874-49d5-9ecc-b706c4cd46ac", 00:20:02.456 "assigned_rate_limits": { 00:20:02.456 "rw_ios_per_sec": 0, 00:20:02.456 "rw_mbytes_per_sec": 0, 00:20:02.456 "r_mbytes_per_sec": 0, 00:20:02.456 "w_mbytes_per_sec": 0 00:20:02.456 }, 00:20:02.457 "claimed": true, 00:20:02.457 "claim_type": "exclusive_write", 00:20:02.457 "zoned": false, 00:20:02.457 "supported_io_types": { 00:20:02.457 "read": true, 00:20:02.457 "write": true, 00:20:02.457 "unmap": true, 00:20:02.457 "flush": true, 00:20:02.457 "reset": true, 00:20:02.457 "nvme_admin": false, 00:20:02.457 "nvme_io": false, 00:20:02.457 "nvme_io_md": false, 00:20:02.457 "write_zeroes": true, 00:20:02.457 "zcopy": true, 00:20:02.457 "get_zone_info": false, 00:20:02.457 "zone_management": false, 00:20:02.457 "zone_append": false, 00:20:02.457 "compare": false, 00:20:02.457 "compare_and_write": false, 00:20:02.457 "abort": true, 00:20:02.457 "seek_hole": false, 00:20:02.457 "seek_data": false, 00:20:02.457 "copy": true, 00:20:02.457 "nvme_iov_md": false 
00:20:02.457 }, 00:20:02.457 "memory_domains": [ 00:20:02.457 { 00:20:02.457 "dma_device_id": "system", 00:20:02.457 "dma_device_type": 1 00:20:02.457 }, 00:20:02.457 { 00:20:02.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.457 "dma_device_type": 2 00:20:02.457 } 00:20:02.457 ], 00:20:02.457 "driver_specific": {} 00:20:02.457 } 00:20:02.457 ] 00:20:02.457 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.457 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:20:02.457 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:02.457 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:02.457 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:02.457 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:02.457 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.457 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:02.457 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:02.457 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:02.457 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.457 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.457 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.457 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:20:02.457 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.457 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.457 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.457 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.457 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.457 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.457 "name": "Existed_Raid", 00:20:02.457 "uuid": "ada0ff95-55c4-494f-8425-fe38d12678bc", 00:20:02.457 "strip_size_kb": 0, 00:20:02.457 "state": "online", 00:20:02.457 "raid_level": "raid1", 00:20:02.457 "superblock": true, 00:20:02.457 "num_base_bdevs": 2, 00:20:02.457 "num_base_bdevs_discovered": 2, 00:20:02.457 "num_base_bdevs_operational": 2, 00:20:02.457 "base_bdevs_list": [ 00:20:02.457 { 00:20:02.457 "name": "BaseBdev1", 00:20:02.457 "uuid": "4bcf8cec-2d0e-44f8-b268-caf46ea916df", 00:20:02.457 "is_configured": true, 00:20:02.457 "data_offset": 256, 00:20:02.457 "data_size": 7936 00:20:02.457 }, 00:20:02.457 { 00:20:02.457 "name": "BaseBdev2", 00:20:02.457 "uuid": "dc686571-3874-49d5-9ecc-b706c4cd46ac", 00:20:02.457 "is_configured": true, 00:20:02.457 "data_offset": 256, 00:20:02.457 "data_size": 7936 00:20:02.457 } 00:20:02.457 ] 00:20:02.457 }' 00:20:02.457 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.457 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.024 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:03.024 08:53:33 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:03.024 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:03.024 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:03.024 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:03.024 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:03.024 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:03.024 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:03.024 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.024 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.024 [2024-11-20 08:53:33.682274] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:03.024 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.024 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:03.024 "name": "Existed_Raid", 00:20:03.024 "aliases": [ 00:20:03.024 "ada0ff95-55c4-494f-8425-fe38d12678bc" 00:20:03.024 ], 00:20:03.024 "product_name": "Raid Volume", 00:20:03.024 "block_size": 4096, 00:20:03.024 "num_blocks": 7936, 00:20:03.024 "uuid": "ada0ff95-55c4-494f-8425-fe38d12678bc", 00:20:03.024 "assigned_rate_limits": { 00:20:03.024 "rw_ios_per_sec": 0, 00:20:03.024 "rw_mbytes_per_sec": 0, 00:20:03.024 "r_mbytes_per_sec": 0, 00:20:03.024 "w_mbytes_per_sec": 0 00:20:03.024 }, 00:20:03.024 "claimed": false, 00:20:03.024 "zoned": false, 00:20:03.024 "supported_io_types": { 00:20:03.024 "read": true, 
00:20:03.024 "write": true, 00:20:03.024 "unmap": false, 00:20:03.024 "flush": false, 00:20:03.024 "reset": true, 00:20:03.024 "nvme_admin": false, 00:20:03.024 "nvme_io": false, 00:20:03.024 "nvme_io_md": false, 00:20:03.024 "write_zeroes": true, 00:20:03.024 "zcopy": false, 00:20:03.024 "get_zone_info": false, 00:20:03.024 "zone_management": false, 00:20:03.024 "zone_append": false, 00:20:03.024 "compare": false, 00:20:03.024 "compare_and_write": false, 00:20:03.024 "abort": false, 00:20:03.024 "seek_hole": false, 00:20:03.024 "seek_data": false, 00:20:03.024 "copy": false, 00:20:03.024 "nvme_iov_md": false 00:20:03.024 }, 00:20:03.024 "memory_domains": [ 00:20:03.024 { 00:20:03.024 "dma_device_id": "system", 00:20:03.024 "dma_device_type": 1 00:20:03.024 }, 00:20:03.024 { 00:20:03.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.024 "dma_device_type": 2 00:20:03.024 }, 00:20:03.024 { 00:20:03.024 "dma_device_id": "system", 00:20:03.024 "dma_device_type": 1 00:20:03.024 }, 00:20:03.024 { 00:20:03.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.024 "dma_device_type": 2 00:20:03.024 } 00:20:03.024 ], 00:20:03.024 "driver_specific": { 00:20:03.024 "raid": { 00:20:03.024 "uuid": "ada0ff95-55c4-494f-8425-fe38d12678bc", 00:20:03.024 "strip_size_kb": 0, 00:20:03.024 "state": "online", 00:20:03.024 "raid_level": "raid1", 00:20:03.024 "superblock": true, 00:20:03.024 "num_base_bdevs": 2, 00:20:03.024 "num_base_bdevs_discovered": 2, 00:20:03.024 "num_base_bdevs_operational": 2, 00:20:03.024 "base_bdevs_list": [ 00:20:03.024 { 00:20:03.024 "name": "BaseBdev1", 00:20:03.024 "uuid": "4bcf8cec-2d0e-44f8-b268-caf46ea916df", 00:20:03.024 "is_configured": true, 00:20:03.024 "data_offset": 256, 00:20:03.024 "data_size": 7936 00:20:03.024 }, 00:20:03.024 { 00:20:03.024 "name": "BaseBdev2", 00:20:03.024 "uuid": "dc686571-3874-49d5-9ecc-b706c4cd46ac", 00:20:03.024 "is_configured": true, 00:20:03.024 "data_offset": 256, 00:20:03.024 "data_size": 7936 00:20:03.024 } 
00:20:03.024 ] 00:20:03.024 } 00:20:03.024 } 00:20:03.024 }' 00:20:03.024 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:03.024 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:03.024 BaseBdev2' 00:20:03.024 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:03.024 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:03.024 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:03.024 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:03.024 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:03.024 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.024 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.024 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.024 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:03.024 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:03.024 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:03.025 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:03.025 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:03.025 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.025 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.025 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.283 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:03.283 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:03.283 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:03.283 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.283 08:53:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.283 [2024-11-20 08:53:33.953971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:03.283 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.283 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:03.283 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:03.283 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:03.283 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:20:03.283 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:03.283 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:03.283 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:03.283 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:03.283 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:03.283 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:03.283 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:03.283 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.283 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.283 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.283 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.283 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:03.283 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.283 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.283 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.283 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.283 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.283 "name": "Existed_Raid", 00:20:03.283 "uuid": "ada0ff95-55c4-494f-8425-fe38d12678bc", 00:20:03.283 "strip_size_kb": 0, 00:20:03.283 "state": "online", 00:20:03.283 "raid_level": "raid1", 00:20:03.283 "superblock": true, 00:20:03.283 "num_base_bdevs": 2, 00:20:03.283 
"num_base_bdevs_discovered": 1, 00:20:03.283 "num_base_bdevs_operational": 1, 00:20:03.283 "base_bdevs_list": [ 00:20:03.283 { 00:20:03.283 "name": null, 00:20:03.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.283 "is_configured": false, 00:20:03.283 "data_offset": 0, 00:20:03.283 "data_size": 7936 00:20:03.283 }, 00:20:03.283 { 00:20:03.283 "name": "BaseBdev2", 00:20:03.283 "uuid": "dc686571-3874-49d5-9ecc-b706c4cd46ac", 00:20:03.283 "is_configured": true, 00:20:03.283 "data_offset": 256, 00:20:03.283 "data_size": 7936 00:20:03.283 } 00:20:03.283 ] 00:20:03.283 }' 00:20:03.283 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.283 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:03.850 08:53:34 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.850 [2024-11-20 08:53:34.605971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:03.850 [2024-11-20 08:53:34.606295] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:03.850 [2024-11-20 08:53:34.689122] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:03.850 [2024-11-20 08:53:34.689402] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:03.850 [2024-11-20 08:53:34.689438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' 
']' 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86337 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86337 ']' 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86337 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.850 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86337 00:20:04.110 killing process with pid 86337 00:20:04.110 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:04.110 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:04.110 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86337' 00:20:04.110 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86337 00:20:04.110 [2024-11-20 08:53:34.771057] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:04.110 08:53:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86337 00:20:04.110 [2024-11-20 08:53:34.786021] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:05.047 08:53:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:20:05.047 00:20:05.047 real 0m5.422s 00:20:05.047 user 0m8.244s 00:20:05.047 sys 0m0.761s 00:20:05.047 08:53:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:20:05.047 ************************************ 00:20:05.047 END TEST raid_state_function_test_sb_4k 00:20:05.047 ************************************ 00:20:05.047 08:53:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:05.047 08:53:35 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:20:05.047 08:53:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:05.047 08:53:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:05.047 08:53:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:05.047 ************************************ 00:20:05.047 START TEST raid_superblock_test_4k 00:20:05.047 ************************************ 00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 
00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86589 00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86589 00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86589 ']' 00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.047 08:53:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:05.047 [2024-11-20 08:53:35.956618] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:20:05.047 [2024-11-20 08:53:35.957021] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86589 ] 00:20:05.305 [2024-11-20 08:53:36.142281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.565 [2024-11-20 08:53:36.266850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.565 [2024-11-20 08:53:36.468981] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:05.565 [2024-11-20 08:53:36.469046] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.195 malloc1 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.195 [2024-11-20 08:53:36.991775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:06.195 [2024-11-20 08:53:36.992049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.195 [2024-11-20 08:53:36.992163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:06.195 [2024-11-20 08:53:36.992403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.195 [2024-11-20 08:53:36.995249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.195 [2024-11-20 08:53:36.995419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:06.195 pt1 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.195 08:53:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.195 malloc2 00:20:06.195 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.195 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:06.195 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.195 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.195 [2024-11-20 08:53:37.040475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:06.195 [2024-11-20 08:53:37.040531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.196 [2024-11-20 08:53:37.040563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:06.196 [2024-11-20 08:53:37.040589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.196 [2024-11-20 08:53:37.043442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.196 [2024-11-20 
08:53:37.043489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:06.196 pt2 00:20:06.196 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.196 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:06.196 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:06.196 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:06.196 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.196 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.196 [2024-11-20 08:53:37.048558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:06.196 [2024-11-20 08:53:37.051076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:06.196 [2024-11-20 08:53:37.051338] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:06.196 [2024-11-20 08:53:37.051364] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:06.196 [2024-11-20 08:53:37.051674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:06.196 [2024-11-20 08:53:37.051912] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:06.196 [2024-11-20 08:53:37.051937] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:06.196 [2024-11-20 08:53:37.052104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:06.196 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.196 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:06.196 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:06.196 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:06.196 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:06.196 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:06.196 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:06.196 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.196 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.196 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.196 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.196 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.196 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.196 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.196 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.196 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.454 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.454 "name": "raid_bdev1", 00:20:06.454 "uuid": "4173c5ef-6532-4ddf-bd00-5808829885f7", 00:20:06.454 "strip_size_kb": 0, 00:20:06.454 "state": "online", 00:20:06.454 "raid_level": "raid1", 00:20:06.454 "superblock": true, 00:20:06.454 "num_base_bdevs": 2, 00:20:06.454 
"num_base_bdevs_discovered": 2, 00:20:06.454 "num_base_bdevs_operational": 2, 00:20:06.454 "base_bdevs_list": [ 00:20:06.454 { 00:20:06.454 "name": "pt1", 00:20:06.454 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:06.454 "is_configured": true, 00:20:06.454 "data_offset": 256, 00:20:06.454 "data_size": 7936 00:20:06.454 }, 00:20:06.454 { 00:20:06.454 "name": "pt2", 00:20:06.454 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:06.454 "is_configured": true, 00:20:06.454 "data_offset": 256, 00:20:06.454 "data_size": 7936 00:20:06.454 } 00:20:06.454 ] 00:20:06.454 }' 00:20:06.454 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.454 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.713 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:06.713 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:06.713 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:06.713 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:06.713 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:06.713 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:06.713 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:06.713 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:06.713 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.713 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.713 [2024-11-20 08:53:37.593007] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:20:06.713 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:06.972 "name": "raid_bdev1", 00:20:06.972 "aliases": [ 00:20:06.972 "4173c5ef-6532-4ddf-bd00-5808829885f7" 00:20:06.972 ], 00:20:06.972 "product_name": "Raid Volume", 00:20:06.972 "block_size": 4096, 00:20:06.972 "num_blocks": 7936, 00:20:06.972 "uuid": "4173c5ef-6532-4ddf-bd00-5808829885f7", 00:20:06.972 "assigned_rate_limits": { 00:20:06.972 "rw_ios_per_sec": 0, 00:20:06.972 "rw_mbytes_per_sec": 0, 00:20:06.972 "r_mbytes_per_sec": 0, 00:20:06.972 "w_mbytes_per_sec": 0 00:20:06.972 }, 00:20:06.972 "claimed": false, 00:20:06.972 "zoned": false, 00:20:06.972 "supported_io_types": { 00:20:06.972 "read": true, 00:20:06.972 "write": true, 00:20:06.972 "unmap": false, 00:20:06.972 "flush": false, 00:20:06.972 "reset": true, 00:20:06.972 "nvme_admin": false, 00:20:06.972 "nvme_io": false, 00:20:06.972 "nvme_io_md": false, 00:20:06.972 "write_zeroes": true, 00:20:06.972 "zcopy": false, 00:20:06.972 "get_zone_info": false, 00:20:06.972 "zone_management": false, 00:20:06.972 "zone_append": false, 00:20:06.972 "compare": false, 00:20:06.972 "compare_and_write": false, 00:20:06.972 "abort": false, 00:20:06.972 "seek_hole": false, 00:20:06.972 "seek_data": false, 00:20:06.972 "copy": false, 00:20:06.972 "nvme_iov_md": false 00:20:06.972 }, 00:20:06.972 "memory_domains": [ 00:20:06.972 { 00:20:06.972 "dma_device_id": "system", 00:20:06.972 "dma_device_type": 1 00:20:06.972 }, 00:20:06.972 { 00:20:06.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.972 "dma_device_type": 2 00:20:06.972 }, 00:20:06.972 { 00:20:06.972 "dma_device_id": "system", 00:20:06.972 "dma_device_type": 1 00:20:06.972 }, 00:20:06.972 { 00:20:06.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.972 "dma_device_type": 2 00:20:06.972 } 00:20:06.972 ], 
00:20:06.972 "driver_specific": { 00:20:06.972 "raid": { 00:20:06.972 "uuid": "4173c5ef-6532-4ddf-bd00-5808829885f7", 00:20:06.972 "strip_size_kb": 0, 00:20:06.972 "state": "online", 00:20:06.972 "raid_level": "raid1", 00:20:06.972 "superblock": true, 00:20:06.972 "num_base_bdevs": 2, 00:20:06.972 "num_base_bdevs_discovered": 2, 00:20:06.972 "num_base_bdevs_operational": 2, 00:20:06.972 "base_bdevs_list": [ 00:20:06.972 { 00:20:06.972 "name": "pt1", 00:20:06.972 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:06.972 "is_configured": true, 00:20:06.972 "data_offset": 256, 00:20:06.972 "data_size": 7936 00:20:06.972 }, 00:20:06.972 { 00:20:06.972 "name": "pt2", 00:20:06.972 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:06.972 "is_configured": true, 00:20:06.972 "data_offset": 256, 00:20:06.972 "data_size": 7936 00:20:06.972 } 00:20:06.972 ] 00:20:06.972 } 00:20:06.972 } 00:20:06.972 }' 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:06.972 pt2' 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.972 [2024-11-20 08:53:37.853093] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:06.972 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4173c5ef-6532-4ddf-bd00-5808829885f7 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 4173c5ef-6532-4ddf-bd00-5808829885f7 ']' 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:07.231 [2024-11-20 08:53:37.904711] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:07.231 [2024-11-20 08:53:37.904925] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:07.231 [2024-11-20 08:53:37.905185] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:07.231 [2024-11-20 08:53:37.905272] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:07.231 [2024-11-20 08:53:37.905295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:07.231 08:53:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:07.231 [2024-11-20 08:53:38.044802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:07.231 [2024-11-20 08:53:38.047384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:07.231 [2024-11-20 08:53:38.047594] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:07.231 [2024-11-20 08:53:38.047682] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:07.231 [2024-11-20 08:53:38.047710] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:07.231 [2024-11-20 08:53:38.047726] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:07.231 request: 00:20:07.231 { 00:20:07.231 "name": "raid_bdev1", 00:20:07.231 "raid_level": "raid1", 00:20:07.231 "base_bdevs": [ 00:20:07.231 "malloc1", 00:20:07.231 "malloc2" 00:20:07.231 ], 00:20:07.231 "superblock": false, 00:20:07.231 "method": "bdev_raid_create", 00:20:07.231 "req_id": 1 00:20:07.231 } 00:20:07.231 Got JSON-RPC error response 00:20:07.231 response: 00:20:07.231 { 00:20:07.231 "code": -17, 00:20:07.231 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:07.231 } 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.231 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:07.231 [2024-11-20 08:53:38.112804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:07.231 [2024-11-20 08:53:38.113030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.231 [2024-11-20 08:53:38.113100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:07.231 [2024-11-20 08:53:38.113346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.231 [2024-11-20 08:53:38.116486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.232 [2024-11-20 08:53:38.116662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:07.232 [2024-11-20 08:53:38.116864] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:07.232 [2024-11-20 08:53:38.117052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:07.232 pt1 00:20:07.232 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.232 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:07.232 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:07.232 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:07.232 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:07.232 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:07.232 08:53:38 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:07.232 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.232 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.232 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.232 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.232 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.232 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.232 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.232 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:07.232 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.489 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.489 "name": "raid_bdev1", 00:20:07.489 "uuid": "4173c5ef-6532-4ddf-bd00-5808829885f7", 00:20:07.489 "strip_size_kb": 0, 00:20:07.489 "state": "configuring", 00:20:07.489 "raid_level": "raid1", 00:20:07.489 "superblock": true, 00:20:07.489 "num_base_bdevs": 2, 00:20:07.489 "num_base_bdevs_discovered": 1, 00:20:07.489 "num_base_bdevs_operational": 2, 00:20:07.489 "base_bdevs_list": [ 00:20:07.489 { 00:20:07.489 "name": "pt1", 00:20:07.489 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:07.489 "is_configured": true, 00:20:07.489 "data_offset": 256, 00:20:07.489 "data_size": 7936 00:20:07.489 }, 00:20:07.489 { 00:20:07.489 "name": null, 00:20:07.489 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:07.489 "is_configured": false, 00:20:07.489 "data_offset": 256, 00:20:07.489 "data_size": 7936 00:20:07.489 } 
00:20:07.489 ] 00:20:07.489 }' 00:20:07.489 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.489 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:07.748 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:07.748 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:07.748 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:07.748 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:07.748 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.748 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:07.748 [2024-11-20 08:53:38.641106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:07.748 [2024-11-20 08:53:38.641398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.748 [2024-11-20 08:53:38.641474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:07.748 [2024-11-20 08:53:38.641723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.748 [2024-11-20 08:53:38.642468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.748 [2024-11-20 08:53:38.642575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:07.748 [2024-11-20 08:53:38.642669] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:07.748 [2024-11-20 08:53:38.642705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:07.748 [2024-11-20 08:53:38.642844] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:20:07.748 [2024-11-20 08:53:38.642865] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:07.748 [2024-11-20 08:53:38.643182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:07.748 [2024-11-20 08:53:38.643413] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:07.748 [2024-11-20 08:53:38.643430] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:07.748 [2024-11-20 08:53:38.643638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:07.748 pt2 00:20:07.748 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.748 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:07.748 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:07.748 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:07.748 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:07.748 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:07.748 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:07.748 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:07.748 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:07.748 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.748 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.748 08:53:38 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.748 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.748 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.748 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.748 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:07.748 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.007 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.007 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.007 "name": "raid_bdev1", 00:20:08.007 "uuid": "4173c5ef-6532-4ddf-bd00-5808829885f7", 00:20:08.007 "strip_size_kb": 0, 00:20:08.007 "state": "online", 00:20:08.007 "raid_level": "raid1", 00:20:08.007 "superblock": true, 00:20:08.007 "num_base_bdevs": 2, 00:20:08.007 "num_base_bdevs_discovered": 2, 00:20:08.007 "num_base_bdevs_operational": 2, 00:20:08.007 "base_bdevs_list": [ 00:20:08.007 { 00:20:08.007 "name": "pt1", 00:20:08.007 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:08.007 "is_configured": true, 00:20:08.007 "data_offset": 256, 00:20:08.007 "data_size": 7936 00:20:08.007 }, 00:20:08.007 { 00:20:08.007 "name": "pt2", 00:20:08.007 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:08.007 "is_configured": true, 00:20:08.007 "data_offset": 256, 00:20:08.007 "data_size": 7936 00:20:08.007 } 00:20:08.007 ] 00:20:08.007 }' 00:20:08.007 08:53:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.007 08:53:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:08.264 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:20:08.264 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:08.264 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:08.264 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:08.264 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:08.264 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:08.264 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:08.264 08:53:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.264 08:53:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:08.264 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:08.264 [2024-11-20 08:53:39.177671] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:08.521 08:53:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.521 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:08.521 "name": "raid_bdev1", 00:20:08.521 "aliases": [ 00:20:08.521 "4173c5ef-6532-4ddf-bd00-5808829885f7" 00:20:08.521 ], 00:20:08.521 "product_name": "Raid Volume", 00:20:08.521 "block_size": 4096, 00:20:08.521 "num_blocks": 7936, 00:20:08.521 "uuid": "4173c5ef-6532-4ddf-bd00-5808829885f7", 00:20:08.521 "assigned_rate_limits": { 00:20:08.521 "rw_ios_per_sec": 0, 00:20:08.521 "rw_mbytes_per_sec": 0, 00:20:08.521 "r_mbytes_per_sec": 0, 00:20:08.521 "w_mbytes_per_sec": 0 00:20:08.521 }, 00:20:08.521 "claimed": false, 00:20:08.521 "zoned": false, 00:20:08.521 "supported_io_types": { 00:20:08.521 "read": true, 00:20:08.521 "write": true, 00:20:08.521 "unmap": false, 
00:20:08.521 "flush": false, 00:20:08.521 "reset": true, 00:20:08.521 "nvme_admin": false, 00:20:08.521 "nvme_io": false, 00:20:08.521 "nvme_io_md": false, 00:20:08.521 "write_zeroes": true, 00:20:08.521 "zcopy": false, 00:20:08.521 "get_zone_info": false, 00:20:08.521 "zone_management": false, 00:20:08.521 "zone_append": false, 00:20:08.521 "compare": false, 00:20:08.521 "compare_and_write": false, 00:20:08.521 "abort": false, 00:20:08.521 "seek_hole": false, 00:20:08.521 "seek_data": false, 00:20:08.521 "copy": false, 00:20:08.521 "nvme_iov_md": false 00:20:08.521 }, 00:20:08.521 "memory_domains": [ 00:20:08.521 { 00:20:08.521 "dma_device_id": "system", 00:20:08.521 "dma_device_type": 1 00:20:08.521 }, 00:20:08.521 { 00:20:08.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.521 "dma_device_type": 2 00:20:08.521 }, 00:20:08.521 { 00:20:08.521 "dma_device_id": "system", 00:20:08.521 "dma_device_type": 1 00:20:08.521 }, 00:20:08.521 { 00:20:08.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.521 "dma_device_type": 2 00:20:08.521 } 00:20:08.521 ], 00:20:08.521 "driver_specific": { 00:20:08.521 "raid": { 00:20:08.521 "uuid": "4173c5ef-6532-4ddf-bd00-5808829885f7", 00:20:08.521 "strip_size_kb": 0, 00:20:08.521 "state": "online", 00:20:08.521 "raid_level": "raid1", 00:20:08.521 "superblock": true, 00:20:08.521 "num_base_bdevs": 2, 00:20:08.521 "num_base_bdevs_discovered": 2, 00:20:08.521 "num_base_bdevs_operational": 2, 00:20:08.521 "base_bdevs_list": [ 00:20:08.521 { 00:20:08.521 "name": "pt1", 00:20:08.521 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:08.521 "is_configured": true, 00:20:08.521 "data_offset": 256, 00:20:08.521 "data_size": 7936 00:20:08.521 }, 00:20:08.521 { 00:20:08.521 "name": "pt2", 00:20:08.521 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:08.521 "is_configured": true, 00:20:08.521 "data_offset": 256, 00:20:08.521 "data_size": 7936 00:20:08.521 } 00:20:08.521 ] 00:20:08.521 } 00:20:08.521 } 00:20:08.521 }' 00:20:08.521 
08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:08.521 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:08.521 pt2' 00:20:08.521 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:08.521 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:08.521 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:08.521 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:08.521 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:08.521 08:53:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.521 08:53:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:08.521 08:53:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.521 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:08.521 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:08.521 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:08.521 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:08.521 08:53:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.521 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:08.521 
08:53:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:08.521 08:53:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:08.779 [2024-11-20 08:53:39.445705] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 4173c5ef-6532-4ddf-bd00-5808829885f7 '!=' 4173c5ef-6532-4ddf-bd00-5808829885f7 ']' 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:08.779 [2024-11-20 08:53:39.485429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:08.779 
08:53:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.779 "name": "raid_bdev1", 00:20:08.779 "uuid": "4173c5ef-6532-4ddf-bd00-5808829885f7", 
00:20:08.779 "strip_size_kb": 0, 00:20:08.779 "state": "online", 00:20:08.779 "raid_level": "raid1", 00:20:08.779 "superblock": true, 00:20:08.779 "num_base_bdevs": 2, 00:20:08.779 "num_base_bdevs_discovered": 1, 00:20:08.779 "num_base_bdevs_operational": 1, 00:20:08.779 "base_bdevs_list": [ 00:20:08.779 { 00:20:08.779 "name": null, 00:20:08.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.779 "is_configured": false, 00:20:08.779 "data_offset": 0, 00:20:08.779 "data_size": 7936 00:20:08.779 }, 00:20:08.779 { 00:20:08.779 "name": "pt2", 00:20:08.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:08.779 "is_configured": true, 00:20:08.779 "data_offset": 256, 00:20:08.779 "data_size": 7936 00:20:08.779 } 00:20:08.779 ] 00:20:08.779 }' 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.779 08:53:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:09.346 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:09.346 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.346 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:09.346 [2024-11-20 08:53:40.013696] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:09.346 [2024-11-20 08:53:40.013730] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:09.346 [2024-11-20 08:53:40.013824] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:09.346 [2024-11-20 08:53:40.013886] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:09.346 [2024-11-20 08:53:40.013905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:09.346 08:53:40 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.346 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:09.346 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.346 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.346 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:09.346 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.346 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:09.346 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:09.346 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:09.346 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:09.346 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:20:09.347 08:53:40 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:09.347 [2024-11-20 08:53:40.089620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:09.347 [2024-11-20 08:53:40.089851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.347 [2024-11-20 08:53:40.090011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:09.347 [2024-11-20 08:53:40.090141] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.347 [2024-11-20 08:53:40.093234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.347 [2024-11-20 08:53:40.093410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:09.347 [2024-11-20 08:53:40.093646] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:09.347 [2024-11-20 08:53:40.093824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:09.347 [2024-11-20 08:53:40.094136] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:09.347 [2024-11-20 08:53:40.094286] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:09.347 pt2 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.347 [2024-11-20 08:53:40.094742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:09.347 [2024-11-20 08:53:40.094961] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:09.347 08:53:40 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:09.347 [2024-11-20 08:53:40.095164] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:09.347 [2024-11-20 08:53:40.095373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.347 08:53:40 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.347 "name": "raid_bdev1", 00:20:09.347 "uuid": "4173c5ef-6532-4ddf-bd00-5808829885f7", 00:20:09.347 "strip_size_kb": 0, 00:20:09.347 "state": "online", 00:20:09.347 "raid_level": "raid1", 00:20:09.347 "superblock": true, 00:20:09.347 "num_base_bdevs": 2, 00:20:09.347 "num_base_bdevs_discovered": 1, 00:20:09.347 "num_base_bdevs_operational": 1, 00:20:09.347 "base_bdevs_list": [ 00:20:09.347 { 00:20:09.347 "name": null, 00:20:09.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.347 "is_configured": false, 00:20:09.347 "data_offset": 256, 00:20:09.347 "data_size": 7936 00:20:09.347 }, 00:20:09.347 { 00:20:09.347 "name": "pt2", 00:20:09.347 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:09.347 "is_configured": true, 00:20:09.347 "data_offset": 256, 00:20:09.347 "data_size": 7936 00:20:09.347 } 00:20:09.347 ] 00:20:09.347 }' 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.347 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:09.917 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:09.917 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.917 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:09.917 [2024-11-20 08:53:40.629872] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:09.917 [2024-11-20 08:53:40.629909] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:09.917 [2024-11-20 08:53:40.630030] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:09.917 [2024-11-20 08:53:40.630094] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:20:09.917 [2024-11-20 08:53:40.630124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:09.917 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.917 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.917 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.917 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:09.917 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:09.917 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.917 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:09.917 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:09.917 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:09.917 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:09.917 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.917 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:09.917 [2024-11-20 08:53:40.693879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:09.917 [2024-11-20 08:53:40.694126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.917 [2024-11-20 08:53:40.694247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:09.917 [2024-11-20 08:53:40.694459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.917 [2024-11-20 08:53:40.697564] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.918 [2024-11-20 08:53:40.697770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:09.918 [2024-11-20 08:53:40.697903] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:09.918 [2024-11-20 08:53:40.697965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:09.918 [2024-11-20 08:53:40.698288] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:09.918 [2024-11-20 08:53:40.698308] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:09.918 pt1 00:20:09.918 [2024-11-20 08:53:40.698330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:09.918 [2024-11-20 08:53:40.698412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:09.918 [2024-11-20 08:53:40.698541] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:09.918 [2024-11-20 08:53:40.698567] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:09.918 [2024-11-20 08:53:40.698890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:09.918 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.918 [2024-11-20 08:53:40.699115] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:09.918 [2024-11-20 08:53:40.699136] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:09.918 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:09.918 [2024-11-20 08:53:40.699369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:20:09.918 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:09.918 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:09.918 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:09.918 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:09.918 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:09.918 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:09.918 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.918 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.918 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.918 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.918 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.918 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.918 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.918 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:09.918 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.918 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.918 "name": "raid_bdev1", 00:20:09.918 "uuid": "4173c5ef-6532-4ddf-bd00-5808829885f7", 00:20:09.918 "strip_size_kb": 0, 00:20:09.918 "state": "online", 00:20:09.918 
"raid_level": "raid1", 00:20:09.918 "superblock": true, 00:20:09.918 "num_base_bdevs": 2, 00:20:09.918 "num_base_bdevs_discovered": 1, 00:20:09.918 "num_base_bdevs_operational": 1, 00:20:09.918 "base_bdevs_list": [ 00:20:09.918 { 00:20:09.918 "name": null, 00:20:09.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.918 "is_configured": false, 00:20:09.918 "data_offset": 256, 00:20:09.918 "data_size": 7936 00:20:09.918 }, 00:20:09.918 { 00:20:09.918 "name": "pt2", 00:20:09.918 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:09.918 "is_configured": true, 00:20:09.918 "data_offset": 256, 00:20:09.918 "data_size": 7936 00:20:09.918 } 00:20:09.918 ] 00:20:09.918 }' 00:20:09.918 08:53:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.918 08:53:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:10.485 08:53:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:10.485 08:53:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.485 08:53:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:10.485 08:53:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:10.485 08:53:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.485 08:53:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:10.485 08:53:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:10.485 08:53:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.485 08:53:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:10.485 08:53:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:20:10.485 [2024-11-20 08:53:41.290455] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:10.485 08:53:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.485 08:53:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 4173c5ef-6532-4ddf-bd00-5808829885f7 '!=' 4173c5ef-6532-4ddf-bd00-5808829885f7 ']' 00:20:10.485 08:53:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86589 00:20:10.485 08:53:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86589 ']' 00:20:10.485 08:53:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86589 00:20:10.485 08:53:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:20:10.485 08:53:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.485 08:53:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86589 00:20:10.485 killing process with pid 86589 00:20:10.485 08:53:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:10.485 08:53:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:10.485 08:53:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86589' 00:20:10.485 08:53:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86589 00:20:10.485 [2024-11-20 08:53:41.385027] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:10.485 08:53:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86589 00:20:10.485 [2024-11-20 08:53:41.385134] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:10.485 [2024-11-20 08:53:41.385233] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:10.485 [2024-11-20 08:53:41.385258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:10.743 [2024-11-20 08:53:41.577841] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:12.124 08:53:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:20:12.124 00:20:12.124 real 0m6.766s 00:20:12.124 user 0m10.780s 00:20:12.124 sys 0m0.946s 00:20:12.124 08:53:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:12.124 08:53:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:12.124 ************************************ 00:20:12.124 END TEST raid_superblock_test_4k 00:20:12.124 ************************************ 00:20:12.124 08:53:42 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:20:12.124 08:53:42 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:20:12.124 08:53:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:12.124 08:53:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:12.124 08:53:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:12.124 ************************************ 00:20:12.124 START TEST raid_rebuild_test_sb_4k 00:20:12.124 ************************************ 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:12.124 
08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # 
strip_size=0 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86922 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86922 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86922 ']' 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:12.124 08:53:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:12.124 [2024-11-20 08:53:42.774018] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:20:12.124 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:12.124 Zero copy mechanism will not be used. 
00:20:12.124 [2024-11-20 08:53:42.774369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86922 ] 00:20:12.124 [2024-11-20 08:53:42.950981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.380 [2024-11-20 08:53:43.085466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.380 [2024-11-20 08:53:43.291624] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:12.380 [2024-11-20 08:53:43.291701] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:12.947 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.947 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:20:12.947 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:12.947 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:20:12.947 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.947 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:12.947 BaseBdev1_malloc 00:20:12.947 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.947 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:12.947 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.947 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:12.947 [2024-11-20 08:53:43.837289] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:12.947 [2024-11-20 08:53:43.837518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:12.947 [2024-11-20 08:53:43.837598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:12.947 [2024-11-20 08:53:43.837802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:12.947 [2024-11-20 08:53:43.840728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:12.947 BaseBdev1 00:20:12.947 [2024-11-20 08:53:43.840909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:12.947 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.947 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:12.947 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:20:12.947 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.947 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.205 BaseBdev2_malloc 00:20:13.205 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.205 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:13.205 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.205 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.205 [2024-11-20 08:53:43.889816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:13.205 [2024-11-20 08:53:43.889891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:20:13.205 [2024-11-20 08:53:43.889924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:13.205 [2024-11-20 08:53:43.889944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.205 [2024-11-20 08:53:43.892683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.205 [2024-11-20 08:53:43.892736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:13.205 BaseBdev2 00:20:13.205 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.205 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:20:13.205 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.205 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.205 spare_malloc 00:20:13.205 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.205 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:13.205 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.205 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.205 spare_delay 00:20:13.205 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.205 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:13.205 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.205 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.205 
[2024-11-20 08:53:43.959532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:13.205 [2024-11-20 08:53:43.959611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.205 [2024-11-20 08:53:43.959643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:13.205 [2024-11-20 08:53:43.959663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.206 [2024-11-20 08:53:43.962545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.206 [2024-11-20 08:53:43.962599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:13.206 spare 00:20:13.206 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.206 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:13.206 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.206 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.206 [2024-11-20 08:53:43.967601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:13.206 [2024-11-20 08:53:43.970205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:13.206 [2024-11-20 08:53:43.970583] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:13.206 [2024-11-20 08:53:43.970728] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:13.206 [2024-11-20 08:53:43.971076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:13.206 [2024-11-20 08:53:43.971347] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:13.206 [2024-11-20 
08:53:43.971369] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:13.206 [2024-11-20 08:53:43.971611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.206 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.206 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:13.206 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:13.206 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:13.206 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:13.206 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:13.206 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:13.206 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.206 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.206 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.206 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.206 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.206 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.206 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.206 08:53:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.206 08:53:43 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.206 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.206 "name": "raid_bdev1", 00:20:13.206 "uuid": "01044237-c959-4b04-ab6a-fb74670d7551", 00:20:13.206 "strip_size_kb": 0, 00:20:13.206 "state": "online", 00:20:13.206 "raid_level": "raid1", 00:20:13.206 "superblock": true, 00:20:13.206 "num_base_bdevs": 2, 00:20:13.206 "num_base_bdevs_discovered": 2, 00:20:13.206 "num_base_bdevs_operational": 2, 00:20:13.206 "base_bdevs_list": [ 00:20:13.206 { 00:20:13.206 "name": "BaseBdev1", 00:20:13.206 "uuid": "30d4bf93-dd49-5ce9-a67d-f82bad526ad8", 00:20:13.206 "is_configured": true, 00:20:13.206 "data_offset": 256, 00:20:13.206 "data_size": 7936 00:20:13.206 }, 00:20:13.206 { 00:20:13.206 "name": "BaseBdev2", 00:20:13.206 "uuid": "e317e23e-eb52-5f66-a2a4-2489c92e4c95", 00:20:13.206 "is_configured": true, 00:20:13.206 "data_offset": 256, 00:20:13.206 "data_size": 7936 00:20:13.206 } 00:20:13.206 ] 00:20:13.206 }' 00:20:13.206 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.206 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.774 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:13.774 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:13.774 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.774 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.774 [2024-11-20 08:53:44.484255] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:13.774 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.774 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:20:13.774 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.774 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.774 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:13.774 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:13.774 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.774 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:13.774 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:13.774 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:13.774 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:13.774 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:13.774 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:13.774 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:13.774 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:13.774 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:13.774 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:13.774 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:20:13.774 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:13.774 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:13.774 
08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:14.032 [2024-11-20 08:53:44.832024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:14.032 /dev/nbd0 00:20:14.032 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:14.032 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:14.032 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:14.032 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:20:14.032 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:14.032 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:14.032 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:14.032 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:20:14.032 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:14.032 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:14.032 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:14.032 1+0 records in 00:20:14.032 1+0 records out 00:20:14.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031859 s, 12.9 MB/s 00:20:14.032 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:14.032 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:20:14.032 08:53:44 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:14.032 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:14.032 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:20:14.032 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:14.032 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:14.032 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:14.032 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:14.032 08:53:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:20:14.997 7936+0 records in 00:20:14.997 7936+0 records out 00:20:14.997 32505856 bytes (33 MB, 31 MiB) copied, 0.875987 s, 37.1 MB/s 00:20:14.997 08:53:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:14.997 08:53:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:14.997 08:53:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:14.997 08:53:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:14.997 08:53:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:20:14.997 08:53:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:14.997 08:53:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:15.255 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:15.255 
[2024-11-20 08:53:46.065901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:15.255 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:15.255 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:15.255 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:15.255 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:15.255 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:15.255 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:15.255 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:15.255 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:15.255 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.255 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.255 [2024-11-20 08:53:46.081591] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:15.255 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.255 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:15.255 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:15.255 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:15.256 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:15.256 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:15.256 08:53:46 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:15.256 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.256 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.256 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.256 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.256 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.256 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.256 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.256 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.256 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.256 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.256 "name": "raid_bdev1", 00:20:15.256 "uuid": "01044237-c959-4b04-ab6a-fb74670d7551", 00:20:15.256 "strip_size_kb": 0, 00:20:15.256 "state": "online", 00:20:15.256 "raid_level": "raid1", 00:20:15.256 "superblock": true, 00:20:15.256 "num_base_bdevs": 2, 00:20:15.256 "num_base_bdevs_discovered": 1, 00:20:15.256 "num_base_bdevs_operational": 1, 00:20:15.256 "base_bdevs_list": [ 00:20:15.256 { 00:20:15.256 "name": null, 00:20:15.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.256 "is_configured": false, 00:20:15.256 "data_offset": 0, 00:20:15.256 "data_size": 7936 00:20:15.256 }, 00:20:15.256 { 00:20:15.256 "name": "BaseBdev2", 00:20:15.256 "uuid": "e317e23e-eb52-5f66-a2a4-2489c92e4c95", 00:20:15.256 "is_configured": true, 00:20:15.256 "data_offset": 256, 00:20:15.256 
"data_size": 7936 00:20:15.256 } 00:20:15.256 ] 00:20:15.256 }' 00:20:15.256 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.256 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.835 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:15.835 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.835 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:15.835 [2024-11-20 08:53:46.577818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:15.835 [2024-11-20 08:53:46.594462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:20:15.835 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.835 08:53:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:15.835 [2024-11-20 08:53:46.597379] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:16.772 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.772 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:16.772 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:16.772 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:16.772 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:16.772 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.772 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:20:16.772 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.772 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:16.772 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.772 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:16.772 "name": "raid_bdev1", 00:20:16.772 "uuid": "01044237-c959-4b04-ab6a-fb74670d7551", 00:20:16.772 "strip_size_kb": 0, 00:20:16.772 "state": "online", 00:20:16.772 "raid_level": "raid1", 00:20:16.772 "superblock": true, 00:20:16.772 "num_base_bdevs": 2, 00:20:16.772 "num_base_bdevs_discovered": 2, 00:20:16.772 "num_base_bdevs_operational": 2, 00:20:16.772 "process": { 00:20:16.772 "type": "rebuild", 00:20:16.772 "target": "spare", 00:20:16.772 "progress": { 00:20:16.772 "blocks": 2560, 00:20:16.772 "percent": 32 00:20:16.772 } 00:20:16.772 }, 00:20:16.772 "base_bdevs_list": [ 00:20:16.772 { 00:20:16.772 "name": "spare", 00:20:16.772 "uuid": "900f0667-14aa-5fd5-8738-eb54ca269166", 00:20:16.772 "is_configured": true, 00:20:16.772 "data_offset": 256, 00:20:16.772 "data_size": 7936 00:20:16.772 }, 00:20:16.772 { 00:20:16.772 "name": "BaseBdev2", 00:20:16.772 "uuid": "e317e23e-eb52-5f66-a2a4-2489c92e4c95", 00:20:16.772 "is_configured": true, 00:20:16.772 "data_offset": 256, 00:20:16.772 "data_size": 7936 00:20:16.772 } 00:20:16.772 ] 00:20:16.772 }' 00:20:16.772 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:17.030 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:17.030 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:17.030 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:20:17.030 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:17.030 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.030 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.030 [2024-11-20 08:53:47.770643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:17.030 [2024-11-20 08:53:47.806570] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:17.030 [2024-11-20 08:53:47.806925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:17.030 [2024-11-20 08:53:47.806955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:17.030 [2024-11-20 08:53:47.806972] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:17.030 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.030 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:17.030 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:17.030 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:17.030 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:17.030 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:17.030 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:17.030 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.030 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:20:17.030 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.030 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.030 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.030 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.030 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.030 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.030 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.030 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.030 "name": "raid_bdev1", 00:20:17.030 "uuid": "01044237-c959-4b04-ab6a-fb74670d7551", 00:20:17.030 "strip_size_kb": 0, 00:20:17.030 "state": "online", 00:20:17.030 "raid_level": "raid1", 00:20:17.030 "superblock": true, 00:20:17.030 "num_base_bdevs": 2, 00:20:17.030 "num_base_bdevs_discovered": 1, 00:20:17.030 "num_base_bdevs_operational": 1, 00:20:17.030 "base_bdevs_list": [ 00:20:17.031 { 00:20:17.031 "name": null, 00:20:17.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.031 "is_configured": false, 00:20:17.031 "data_offset": 0, 00:20:17.031 "data_size": 7936 00:20:17.031 }, 00:20:17.031 { 00:20:17.031 "name": "BaseBdev2", 00:20:17.031 "uuid": "e317e23e-eb52-5f66-a2a4-2489c92e4c95", 00:20:17.031 "is_configured": true, 00:20:17.031 "data_offset": 256, 00:20:17.031 "data_size": 7936 00:20:17.031 } 00:20:17.031 ] 00:20:17.031 }' 00:20:17.031 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.031 08:53:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.599 08:53:48 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:17.599 08:53:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:17.599 08:53:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:17.599 08:53:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:17.599 08:53:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:17.599 08:53:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.599 08:53:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.599 08:53:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.599 08:53:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.599 08:53:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.599 08:53:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:17.599 "name": "raid_bdev1", 00:20:17.599 "uuid": "01044237-c959-4b04-ab6a-fb74670d7551", 00:20:17.599 "strip_size_kb": 0, 00:20:17.599 "state": "online", 00:20:17.599 "raid_level": "raid1", 00:20:17.599 "superblock": true, 00:20:17.599 "num_base_bdevs": 2, 00:20:17.599 "num_base_bdevs_discovered": 1, 00:20:17.599 "num_base_bdevs_operational": 1, 00:20:17.599 "base_bdevs_list": [ 00:20:17.599 { 00:20:17.599 "name": null, 00:20:17.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.599 "is_configured": false, 00:20:17.599 "data_offset": 0, 00:20:17.599 "data_size": 7936 00:20:17.599 }, 00:20:17.599 { 00:20:17.599 "name": "BaseBdev2", 00:20:17.599 "uuid": "e317e23e-eb52-5f66-a2a4-2489c92e4c95", 00:20:17.599 "is_configured": true, 00:20:17.599 "data_offset": 
256, 00:20:17.599 "data_size": 7936 00:20:17.599 } 00:20:17.599 ] 00:20:17.599 }' 00:20:17.599 08:53:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:17.599 08:53:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:17.599 08:53:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:17.859 08:53:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:17.859 08:53:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:17.859 08:53:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.859 08:53:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:17.859 [2024-11-20 08:53:48.539361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:17.859 [2024-11-20 08:53:48.555497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:20:17.859 08:53:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.859 08:53:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:17.859 [2024-11-20 08:53:48.558091] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:18.796 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:18.796 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:18.796 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:18.796 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:18.796 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:18.796 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.796 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.796 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.796 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:18.796 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.796 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:18.796 "name": "raid_bdev1", 00:20:18.796 "uuid": "01044237-c959-4b04-ab6a-fb74670d7551", 00:20:18.796 "strip_size_kb": 0, 00:20:18.796 "state": "online", 00:20:18.796 "raid_level": "raid1", 00:20:18.796 "superblock": true, 00:20:18.796 "num_base_bdevs": 2, 00:20:18.796 "num_base_bdevs_discovered": 2, 00:20:18.796 "num_base_bdevs_operational": 2, 00:20:18.796 "process": { 00:20:18.796 "type": "rebuild", 00:20:18.796 "target": "spare", 00:20:18.796 "progress": { 00:20:18.796 "blocks": 2560, 00:20:18.796 "percent": 32 00:20:18.796 } 00:20:18.796 }, 00:20:18.796 "base_bdevs_list": [ 00:20:18.796 { 00:20:18.796 "name": "spare", 00:20:18.796 "uuid": "900f0667-14aa-5fd5-8738-eb54ca269166", 00:20:18.796 "is_configured": true, 00:20:18.796 "data_offset": 256, 00:20:18.796 "data_size": 7936 00:20:18.796 }, 00:20:18.796 { 00:20:18.796 "name": "BaseBdev2", 00:20:18.796 "uuid": "e317e23e-eb52-5f66-a2a4-2489c92e4c95", 00:20:18.796 "is_configured": true, 00:20:18.796 "data_offset": 256, 00:20:18.796 "data_size": 7936 00:20:18.796 } 00:20:18.796 ] 00:20:18.796 }' 00:20:18.796 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.796 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:20:18.796 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.055 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:19.055 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:19.055 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:19.055 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:19.055 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:19.055 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:19.055 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:19.055 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=730 00:20:19.055 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:19.055 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:19.055 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.055 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:19.055 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:19.055 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.055 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.055 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.055 08:53:49 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.055 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:19.055 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.055 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.055 "name": "raid_bdev1", 00:20:19.055 "uuid": "01044237-c959-4b04-ab6a-fb74670d7551", 00:20:19.055 "strip_size_kb": 0, 00:20:19.055 "state": "online", 00:20:19.055 "raid_level": "raid1", 00:20:19.055 "superblock": true, 00:20:19.055 "num_base_bdevs": 2, 00:20:19.055 "num_base_bdevs_discovered": 2, 00:20:19.055 "num_base_bdevs_operational": 2, 00:20:19.055 "process": { 00:20:19.055 "type": "rebuild", 00:20:19.055 "target": "spare", 00:20:19.055 "progress": { 00:20:19.055 "blocks": 2816, 00:20:19.055 "percent": 35 00:20:19.055 } 00:20:19.055 }, 00:20:19.055 "base_bdevs_list": [ 00:20:19.055 { 00:20:19.055 "name": "spare", 00:20:19.055 "uuid": "900f0667-14aa-5fd5-8738-eb54ca269166", 00:20:19.055 "is_configured": true, 00:20:19.055 "data_offset": 256, 00:20:19.055 "data_size": 7936 00:20:19.055 }, 00:20:19.055 { 00:20:19.055 "name": "BaseBdev2", 00:20:19.055 "uuid": "e317e23e-eb52-5f66-a2a4-2489c92e4c95", 00:20:19.055 "is_configured": true, 00:20:19.055 "data_offset": 256, 00:20:19.055 "data_size": 7936 00:20:19.055 } 00:20:19.055 ] 00:20:19.055 }' 00:20:19.055 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.055 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:19.055 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.055 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:19.055 08:53:49 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:20:20.010 08:53:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:20.011 08:53:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.011 08:53:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:20.011 08:53:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:20.011 08:53:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:20.011 08:53:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:20.011 08:53:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.011 08:53:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.011 08:53:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:20.011 08:53:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.011 08:53:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.269 08:53:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.269 "name": "raid_bdev1", 00:20:20.269 "uuid": "01044237-c959-4b04-ab6a-fb74670d7551", 00:20:20.269 "strip_size_kb": 0, 00:20:20.269 "state": "online", 00:20:20.269 "raid_level": "raid1", 00:20:20.269 "superblock": true, 00:20:20.269 "num_base_bdevs": 2, 00:20:20.269 "num_base_bdevs_discovered": 2, 00:20:20.269 "num_base_bdevs_operational": 2, 00:20:20.269 "process": { 00:20:20.269 "type": "rebuild", 00:20:20.269 "target": "spare", 00:20:20.269 "progress": { 00:20:20.269 "blocks": 5888, 00:20:20.270 "percent": 74 00:20:20.270 } 00:20:20.270 }, 00:20:20.270 "base_bdevs_list": [ 00:20:20.270 { 
00:20:20.270 "name": "spare", 00:20:20.270 "uuid": "900f0667-14aa-5fd5-8738-eb54ca269166", 00:20:20.270 "is_configured": true, 00:20:20.270 "data_offset": 256, 00:20:20.270 "data_size": 7936 00:20:20.270 }, 00:20:20.270 { 00:20:20.270 "name": "BaseBdev2", 00:20:20.270 "uuid": "e317e23e-eb52-5f66-a2a4-2489c92e4c95", 00:20:20.270 "is_configured": true, 00:20:20.270 "data_offset": 256, 00:20:20.270 "data_size": 7936 00:20:20.270 } 00:20:20.270 ] 00:20:20.270 }' 00:20:20.270 08:53:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.270 08:53:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:20.270 08:53:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:20.270 08:53:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:20.270 08:53:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:20.838 [2024-11-20 08:53:51.680331] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:20.838 [2024-11-20 08:53:51.680830] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:20.838 [2024-11-20 08:53:51.681037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.406 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:21.406 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:21.406 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.406 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:21.406 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:20:21.406 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.406 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.406 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.406 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.406 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.406 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.406 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.406 "name": "raid_bdev1", 00:20:21.406 "uuid": "01044237-c959-4b04-ab6a-fb74670d7551", 00:20:21.406 "strip_size_kb": 0, 00:20:21.406 "state": "online", 00:20:21.406 "raid_level": "raid1", 00:20:21.406 "superblock": true, 00:20:21.406 "num_base_bdevs": 2, 00:20:21.406 "num_base_bdevs_discovered": 2, 00:20:21.406 "num_base_bdevs_operational": 2, 00:20:21.406 "base_bdevs_list": [ 00:20:21.406 { 00:20:21.406 "name": "spare", 00:20:21.406 "uuid": "900f0667-14aa-5fd5-8738-eb54ca269166", 00:20:21.406 "is_configured": true, 00:20:21.406 "data_offset": 256, 00:20:21.406 "data_size": 7936 00:20:21.406 }, 00:20:21.406 { 00:20:21.406 "name": "BaseBdev2", 00:20:21.406 "uuid": "e317e23e-eb52-5f66-a2a4-2489c92e4c95", 00:20:21.406 "is_configured": true, 00:20:21.406 "data_offset": 256, 00:20:21.406 "data_size": 7936 00:20:21.406 } 00:20:21.406 ] 00:20:21.406 }' 00:20:21.406 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.406 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:21.406 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:20:21.406 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:21.406 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:20:21.406 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:21.407 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.407 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:21.407 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:21.407 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.407 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.407 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.407 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.407 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.407 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.407 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.407 "name": "raid_bdev1", 00:20:21.407 "uuid": "01044237-c959-4b04-ab6a-fb74670d7551", 00:20:21.407 "strip_size_kb": 0, 00:20:21.407 "state": "online", 00:20:21.407 "raid_level": "raid1", 00:20:21.407 "superblock": true, 00:20:21.407 "num_base_bdevs": 2, 00:20:21.407 "num_base_bdevs_discovered": 2, 00:20:21.407 "num_base_bdevs_operational": 2, 00:20:21.407 "base_bdevs_list": [ 00:20:21.407 { 00:20:21.407 "name": "spare", 00:20:21.407 "uuid": "900f0667-14aa-5fd5-8738-eb54ca269166", 00:20:21.407 "is_configured": true, 00:20:21.407 
"data_offset": 256, 00:20:21.407 "data_size": 7936 00:20:21.407 }, 00:20:21.407 { 00:20:21.407 "name": "BaseBdev2", 00:20:21.407 "uuid": "e317e23e-eb52-5f66-a2a4-2489c92e4c95", 00:20:21.407 "is_configured": true, 00:20:21.407 "data_offset": 256, 00:20:21.407 "data_size": 7936 00:20:21.407 } 00:20:21.407 ] 00:20:21.407 }' 00:20:21.407 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.407 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:21.407 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.666 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:21.666 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:21.666 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:21.666 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:21.666 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:21.666 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:21.667 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:21.667 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.667 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.667 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.667 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.667 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:20:21.667 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.667 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.667 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.667 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.667 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.667 "name": "raid_bdev1", 00:20:21.667 "uuid": "01044237-c959-4b04-ab6a-fb74670d7551", 00:20:21.667 "strip_size_kb": 0, 00:20:21.667 "state": "online", 00:20:21.667 "raid_level": "raid1", 00:20:21.667 "superblock": true, 00:20:21.667 "num_base_bdevs": 2, 00:20:21.667 "num_base_bdevs_discovered": 2, 00:20:21.667 "num_base_bdevs_operational": 2, 00:20:21.667 "base_bdevs_list": [ 00:20:21.667 { 00:20:21.667 "name": "spare", 00:20:21.667 "uuid": "900f0667-14aa-5fd5-8738-eb54ca269166", 00:20:21.667 "is_configured": true, 00:20:21.667 "data_offset": 256, 00:20:21.667 "data_size": 7936 00:20:21.667 }, 00:20:21.667 { 00:20:21.667 "name": "BaseBdev2", 00:20:21.667 "uuid": "e317e23e-eb52-5f66-a2a4-2489c92e4c95", 00:20:21.667 "is_configured": true, 00:20:21.667 "data_offset": 256, 00:20:21.667 "data_size": 7936 00:20:21.667 } 00:20:21.667 ] 00:20:21.667 }' 00:20:21.667 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.667 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.235 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:22.235 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.235 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.235 
[2024-11-20 08:53:52.896010] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:22.235 [2024-11-20 08:53:52.896271] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:22.235 [2024-11-20 08:53:52.896386] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:22.235 [2024-11-20 08:53:52.896476] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:22.235 [2024-11-20 08:53:52.896493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:22.235 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.235 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.235 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:20:22.235 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.235 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.235 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.235 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:22.235 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:22.235 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:22.235 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:22.235 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:22.235 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:20:22.235 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:22.235 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:22.235 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:22.235 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:20:22.235 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:22.235 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:22.235 08:53:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:22.494 /dev/nbd0 00:20:22.494 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:22.494 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:22.494 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:22.494 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:20:22.494 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:22.494 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:22.494 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:22.494 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:20:22.494 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:22.494 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:22.494 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:22.494 1+0 records in 00:20:22.494 1+0 records out 00:20:22.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247668 s, 16.5 MB/s 00:20:22.494 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:22.494 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:20:22.494 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:22.494 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:22.494 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:20:22.494 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:22.494 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:22.494 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:22.753 /dev/nbd1 00:20:22.753 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:22.753 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:22.753 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:22.753 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:20:22.753 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:22.753 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:22.753 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:23.013 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:20:23.013 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:23.013 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:23.013 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:23.013 1+0 records in 00:20:23.013 1+0 records out 00:20:23.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331808 s, 12.3 MB/s 00:20:23.013 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.013 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:20:23.013 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.013 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:23.013 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:20:23.013 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:23.013 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:23.013 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:23.013 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:23.013 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:23.013 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:23.013 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:23.013 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:20:23.013 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:23.013 08:53:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:23.272 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:23.272 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:23.272 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:23.272 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:23.272 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:23.272 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:23.272 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:23.272 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:23.272 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:23.272 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:23.532 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:23.532 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:23.532 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:23.532 08:53:54 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.791 [2024-11-20 08:53:54.464561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:23.791 [2024-11-20 08:53:54.464626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.791 [2024-11-20 08:53:54.464659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:23.791 [2024-11-20 08:53:54.464675] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.791 [2024-11-20 08:53:54.467703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.791 
[2024-11-20 08:53:54.467751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:23.791 [2024-11-20 08:53:54.467885] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:23.791 [2024-11-20 08:53:54.467958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:23.791 [2024-11-20 08:53:54.468203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:23.791 spare 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.791 [2024-11-20 08:53:54.568364] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:23.791 [2024-11-20 08:53:54.568440] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:23.791 [2024-11-20 08:53:54.568870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:20:23.791 [2024-11-20 08:53:54.569436] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:23.791 [2024-11-20 08:53:54.569496] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:23.791 [2024-11-20 08:53:54.569907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:23.791 08:53:54 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.791 "name": "raid_bdev1", 00:20:23.791 "uuid": "01044237-c959-4b04-ab6a-fb74670d7551", 00:20:23.791 "strip_size_kb": 0, 00:20:23.791 "state": "online", 00:20:23.791 "raid_level": "raid1", 00:20:23.791 "superblock": true, 00:20:23.791 "num_base_bdevs": 2, 00:20:23.791 "num_base_bdevs_discovered": 2, 00:20:23.791 "num_base_bdevs_operational": 2, 
00:20:23.791 "base_bdevs_list": [ 00:20:23.791 { 00:20:23.791 "name": "spare", 00:20:23.791 "uuid": "900f0667-14aa-5fd5-8738-eb54ca269166", 00:20:23.791 "is_configured": true, 00:20:23.791 "data_offset": 256, 00:20:23.791 "data_size": 7936 00:20:23.791 }, 00:20:23.791 { 00:20:23.791 "name": "BaseBdev2", 00:20:23.791 "uuid": "e317e23e-eb52-5f66-a2a4-2489c92e4c95", 00:20:23.791 "is_configured": true, 00:20:23.791 "data_offset": 256, 00:20:23.791 "data_size": 7936 00:20:23.791 } 00:20:23.791 ] 00:20:23.791 }' 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.791 08:53:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:24.359 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:24.359 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:24.359 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:24.359 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:24.359 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:24.359 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.359 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.359 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:24.359 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.359 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.359 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:24.359 "name": "raid_bdev1", 00:20:24.359 
"uuid": "01044237-c959-4b04-ab6a-fb74670d7551", 00:20:24.359 "strip_size_kb": 0, 00:20:24.359 "state": "online", 00:20:24.359 "raid_level": "raid1", 00:20:24.359 "superblock": true, 00:20:24.359 "num_base_bdevs": 2, 00:20:24.359 "num_base_bdevs_discovered": 2, 00:20:24.359 "num_base_bdevs_operational": 2, 00:20:24.359 "base_bdevs_list": [ 00:20:24.359 { 00:20:24.359 "name": "spare", 00:20:24.359 "uuid": "900f0667-14aa-5fd5-8738-eb54ca269166", 00:20:24.359 "is_configured": true, 00:20:24.359 "data_offset": 256, 00:20:24.359 "data_size": 7936 00:20:24.359 }, 00:20:24.359 { 00:20:24.359 "name": "BaseBdev2", 00:20:24.359 "uuid": "e317e23e-eb52-5f66-a2a4-2489c92e4c95", 00:20:24.359 "is_configured": true, 00:20:24.359 "data_offset": 256, 00:20:24.359 "data_size": 7936 00:20:24.359 } 00:20:24.359 ] 00:20:24.359 }' 00:20:24.359 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:24.359 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:24.359 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:24.359 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:24.359 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.359 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.359 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:24.359 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:24.359 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.618 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:24.618 08:53:55 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:24.618 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.618 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:24.618 [2024-11-20 08:53:55.314032] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:24.618 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.618 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:24.618 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:24.618 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.618 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:24.618 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:24.618 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:24.618 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.618 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.618 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.618 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.618 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.618 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.618 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.618 
08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:24.618 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.618 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.618 "name": "raid_bdev1", 00:20:24.618 "uuid": "01044237-c959-4b04-ab6a-fb74670d7551", 00:20:24.618 "strip_size_kb": 0, 00:20:24.618 "state": "online", 00:20:24.618 "raid_level": "raid1", 00:20:24.618 "superblock": true, 00:20:24.618 "num_base_bdevs": 2, 00:20:24.618 "num_base_bdevs_discovered": 1, 00:20:24.618 "num_base_bdevs_operational": 1, 00:20:24.618 "base_bdevs_list": [ 00:20:24.618 { 00:20:24.618 "name": null, 00:20:24.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.618 "is_configured": false, 00:20:24.618 "data_offset": 0, 00:20:24.618 "data_size": 7936 00:20:24.618 }, 00:20:24.618 { 00:20:24.618 "name": "BaseBdev2", 00:20:24.618 "uuid": "e317e23e-eb52-5f66-a2a4-2489c92e4c95", 00:20:24.618 "is_configured": true, 00:20:24.618 "data_offset": 256, 00:20:24.618 "data_size": 7936 00:20:24.618 } 00:20:24.618 ] 00:20:24.618 }' 00:20:24.618 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.618 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:25.197 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:25.197 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.198 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:25.198 [2024-11-20 08:53:55.878269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:25.198 [2024-11-20 08:53:55.878689] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:20:25.198 [2024-11-20 08:53:55.878725] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:25.198 [2024-11-20 08:53:55.878782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:25.198 [2024-11-20 08:53:55.894407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:20:25.198 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.198 08:53:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:25.198 [2024-11-20 08:53:55.896948] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:26.133 08:53:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:26.133 08:53:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:26.133 08:53:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:26.133 08:53:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:26.133 08:53:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:26.133 08:53:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.133 08:53:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.133 08:53:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.133 08:53:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.133 08:53:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.133 08:53:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:26.133 
"name": "raid_bdev1", 00:20:26.133 "uuid": "01044237-c959-4b04-ab6a-fb74670d7551", 00:20:26.133 "strip_size_kb": 0, 00:20:26.133 "state": "online", 00:20:26.133 "raid_level": "raid1", 00:20:26.133 "superblock": true, 00:20:26.133 "num_base_bdevs": 2, 00:20:26.133 "num_base_bdevs_discovered": 2, 00:20:26.133 "num_base_bdevs_operational": 2, 00:20:26.133 "process": { 00:20:26.133 "type": "rebuild", 00:20:26.133 "target": "spare", 00:20:26.133 "progress": { 00:20:26.133 "blocks": 2560, 00:20:26.133 "percent": 32 00:20:26.133 } 00:20:26.133 }, 00:20:26.133 "base_bdevs_list": [ 00:20:26.133 { 00:20:26.133 "name": "spare", 00:20:26.133 "uuid": "900f0667-14aa-5fd5-8738-eb54ca269166", 00:20:26.133 "is_configured": true, 00:20:26.133 "data_offset": 256, 00:20:26.133 "data_size": 7936 00:20:26.133 }, 00:20:26.133 { 00:20:26.133 "name": "BaseBdev2", 00:20:26.133 "uuid": "e317e23e-eb52-5f66-a2a4-2489c92e4c95", 00:20:26.133 "is_configured": true, 00:20:26.133 "data_offset": 256, 00:20:26.133 "data_size": 7936 00:20:26.133 } 00:20:26.133 ] 00:20:26.133 }' 00:20:26.133 08:53:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:26.133 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:26.133 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:26.393 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:26.393 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:26.393 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.393 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.393 [2024-11-20 08:53:57.058064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:26.393 [2024-11-20 
08:53:57.105782] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:26.393 [2024-11-20 08:53:57.105916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:26.393 [2024-11-20 08:53:57.105943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:26.393 [2024-11-20 08:53:57.105958] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:26.393 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.393 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:26.393 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:26.393 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:26.393 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:26.393 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:26.393 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:26.393 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.393 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.393 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.393 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:26.393 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.393 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:20:26.393 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.393 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.393 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.393 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:26.393 "name": "raid_bdev1", 00:20:26.393 "uuid": "01044237-c959-4b04-ab6a-fb74670d7551", 00:20:26.393 "strip_size_kb": 0, 00:20:26.393 "state": "online", 00:20:26.393 "raid_level": "raid1", 00:20:26.393 "superblock": true, 00:20:26.393 "num_base_bdevs": 2, 00:20:26.393 "num_base_bdevs_discovered": 1, 00:20:26.393 "num_base_bdevs_operational": 1, 00:20:26.393 "base_bdevs_list": [ 00:20:26.393 { 00:20:26.393 "name": null, 00:20:26.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.393 "is_configured": false, 00:20:26.393 "data_offset": 0, 00:20:26.393 "data_size": 7936 00:20:26.393 }, 00:20:26.393 { 00:20:26.393 "name": "BaseBdev2", 00:20:26.393 "uuid": "e317e23e-eb52-5f66-a2a4-2489c92e4c95", 00:20:26.393 "is_configured": true, 00:20:26.393 "data_offset": 256, 00:20:26.393 "data_size": 7936 00:20:26.393 } 00:20:26.393 ] 00:20:26.393 }' 00:20:26.393 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:26.393 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.961 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:26.961 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.961 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.961 [2024-11-20 08:53:57.663848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:26.961 [2024-11-20 08:53:57.663993] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:26.961 [2024-11-20 08:53:57.664026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:26.961 [2024-11-20 08:53:57.664044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.961 [2024-11-20 08:53:57.664851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.961 [2024-11-20 08:53:57.665043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:26.961 [2024-11-20 08:53:57.665297] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:26.961 [2024-11-20 08:53:57.665332] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:26.961 [2024-11-20 08:53:57.665347] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:26.961 [2024-11-20 08:53:57.665395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:26.961 [2024-11-20 08:53:57.682220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:20:26.961 spare 00:20:26.961 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.961 08:53:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:26.961 [2024-11-20 08:53:57.685002] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:27.898 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:27.898 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:27.898 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:27.898 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:27.898 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:27.898 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.898 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.898 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.898 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:27.898 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.898 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:27.898 "name": "raid_bdev1", 00:20:27.898 "uuid": "01044237-c959-4b04-ab6a-fb74670d7551", 00:20:27.898 "strip_size_kb": 0, 00:20:27.898 
"state": "online", 00:20:27.898 "raid_level": "raid1", 00:20:27.898 "superblock": true, 00:20:27.898 "num_base_bdevs": 2, 00:20:27.898 "num_base_bdevs_discovered": 2, 00:20:27.898 "num_base_bdevs_operational": 2, 00:20:27.898 "process": { 00:20:27.898 "type": "rebuild", 00:20:27.898 "target": "spare", 00:20:27.898 "progress": { 00:20:27.898 "blocks": 2560, 00:20:27.898 "percent": 32 00:20:27.898 } 00:20:27.898 }, 00:20:27.898 "base_bdevs_list": [ 00:20:27.898 { 00:20:27.898 "name": "spare", 00:20:27.898 "uuid": "900f0667-14aa-5fd5-8738-eb54ca269166", 00:20:27.898 "is_configured": true, 00:20:27.898 "data_offset": 256, 00:20:27.898 "data_size": 7936 00:20:27.898 }, 00:20:27.898 { 00:20:27.898 "name": "BaseBdev2", 00:20:27.898 "uuid": "e317e23e-eb52-5f66-a2a4-2489c92e4c95", 00:20:27.898 "is_configured": true, 00:20:27.898 "data_offset": 256, 00:20:27.898 "data_size": 7936 00:20:27.898 } 00:20:27.898 ] 00:20:27.898 }' 00:20:27.898 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:27.898 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:27.898 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:28.158 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:28.158 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:28.158 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.158 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.158 [2024-11-20 08:53:58.854700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:28.158 [2024-11-20 08:53:58.893666] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:20:28.158 [2024-11-20 08:53:58.893933] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.158 [2024-11-20 08:53:58.894071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:28.158 [2024-11-20 08:53:58.894123] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:28.159 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.159 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:28.159 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:28.159 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:28.159 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:28.159 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:28.159 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:28.159 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.159 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.159 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.159 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.159 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.159 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.159 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.159 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.159 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.159 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.159 "name": "raid_bdev1", 00:20:28.159 "uuid": "01044237-c959-4b04-ab6a-fb74670d7551", 00:20:28.159 "strip_size_kb": 0, 00:20:28.159 "state": "online", 00:20:28.159 "raid_level": "raid1", 00:20:28.159 "superblock": true, 00:20:28.159 "num_base_bdevs": 2, 00:20:28.159 "num_base_bdevs_discovered": 1, 00:20:28.159 "num_base_bdevs_operational": 1, 00:20:28.159 "base_bdevs_list": [ 00:20:28.159 { 00:20:28.159 "name": null, 00:20:28.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.159 "is_configured": false, 00:20:28.159 "data_offset": 0, 00:20:28.159 "data_size": 7936 00:20:28.159 }, 00:20:28.159 { 00:20:28.159 "name": "BaseBdev2", 00:20:28.159 "uuid": "e317e23e-eb52-5f66-a2a4-2489c92e4c95", 00:20:28.159 "is_configured": true, 00:20:28.159 "data_offset": 256, 00:20:28.159 "data_size": 7936 00:20:28.159 } 00:20:28.159 ] 00:20:28.159 }' 00:20:28.159 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.159 08:53:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.730 08:53:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:28.730 08:53:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:28.730 08:53:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:28.730 08:53:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:28.730 08:53:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:28.730 08:53:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.730 08:53:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.730 08:53:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.730 08:53:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.730 08:53:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.730 08:53:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:28.730 "name": "raid_bdev1", 00:20:28.730 "uuid": "01044237-c959-4b04-ab6a-fb74670d7551", 00:20:28.730 "strip_size_kb": 0, 00:20:28.730 "state": "online", 00:20:28.730 "raid_level": "raid1", 00:20:28.730 "superblock": true, 00:20:28.730 "num_base_bdevs": 2, 00:20:28.730 "num_base_bdevs_discovered": 1, 00:20:28.730 "num_base_bdevs_operational": 1, 00:20:28.730 "base_bdevs_list": [ 00:20:28.730 { 00:20:28.730 "name": null, 00:20:28.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.730 "is_configured": false, 00:20:28.730 "data_offset": 0, 00:20:28.730 "data_size": 7936 00:20:28.730 }, 00:20:28.730 { 00:20:28.730 "name": "BaseBdev2", 00:20:28.730 "uuid": "e317e23e-eb52-5f66-a2a4-2489c92e4c95", 00:20:28.730 "is_configured": true, 00:20:28.730 "data_offset": 256, 00:20:28.730 "data_size": 7936 00:20:28.730 } 00:20:28.730 ] 00:20:28.730 }' 00:20:28.730 08:53:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:28.730 08:53:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:28.730 08:53:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:28.730 08:53:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:28.730 08:53:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:20:28.730 08:53:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.730 08:53:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.730 08:53:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.730 08:53:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:28.730 08:53:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.730 08:53:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.730 [2024-11-20 08:53:59.590499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:28.730 [2024-11-20 08:53:59.590563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.730 [2024-11-20 08:53:59.590596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:28.730 [2024-11-20 08:53:59.590625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.730 [2024-11-20 08:53:59.591184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.730 [2024-11-20 08:53:59.591211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:28.730 [2024-11-20 08:53:59.591319] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:28.730 [2024-11-20 08:53:59.591341] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:28.730 [2024-11-20 08:53:59.591357] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:28.730 [2024-11-20 08:53:59.591370] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed 
to examine bdev BaseBdev1: Invalid argument 00:20:28.730 BaseBdev1 00:20:28.730 08:53:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.730 08:53:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:30.107 08:54:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:30.107 08:54:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:30.107 08:54:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:30.107 08:54:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:30.107 08:54:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:30.107 08:54:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:30.107 08:54:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.107 08:54:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.108 08:54:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.108 08:54:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.108 08:54:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.108 08:54:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.108 08:54:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.108 08:54:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:30.108 08:54:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.108 08:54:00 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.108 "name": "raid_bdev1", 00:20:30.108 "uuid": "01044237-c959-4b04-ab6a-fb74670d7551", 00:20:30.108 "strip_size_kb": 0, 00:20:30.108 "state": "online", 00:20:30.108 "raid_level": "raid1", 00:20:30.108 "superblock": true, 00:20:30.108 "num_base_bdevs": 2, 00:20:30.108 "num_base_bdevs_discovered": 1, 00:20:30.108 "num_base_bdevs_operational": 1, 00:20:30.108 "base_bdevs_list": [ 00:20:30.108 { 00:20:30.108 "name": null, 00:20:30.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.108 "is_configured": false, 00:20:30.108 "data_offset": 0, 00:20:30.108 "data_size": 7936 00:20:30.108 }, 00:20:30.108 { 00:20:30.108 "name": "BaseBdev2", 00:20:30.108 "uuid": "e317e23e-eb52-5f66-a2a4-2489c92e4c95", 00:20:30.108 "is_configured": true, 00:20:30.108 "data_offset": 256, 00:20:30.108 "data_size": 7936 00:20:30.108 } 00:20:30.108 ] 00:20:30.108 }' 00:20:30.108 08:54:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.108 08:54:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:30.381 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:30.381 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:30.381 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:30.381 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:30.381 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:30.381 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.381 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.381 08:54:01 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.381 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:30.381 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.381 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:30.381 "name": "raid_bdev1", 00:20:30.381 "uuid": "01044237-c959-4b04-ab6a-fb74670d7551", 00:20:30.381 "strip_size_kb": 0, 00:20:30.381 "state": "online", 00:20:30.381 "raid_level": "raid1", 00:20:30.381 "superblock": true, 00:20:30.381 "num_base_bdevs": 2, 00:20:30.381 "num_base_bdevs_discovered": 1, 00:20:30.381 "num_base_bdevs_operational": 1, 00:20:30.381 "base_bdevs_list": [ 00:20:30.381 { 00:20:30.381 "name": null, 00:20:30.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.381 "is_configured": false, 00:20:30.381 "data_offset": 0, 00:20:30.381 "data_size": 7936 00:20:30.381 }, 00:20:30.381 { 00:20:30.381 "name": "BaseBdev2", 00:20:30.381 "uuid": "e317e23e-eb52-5f66-a2a4-2489c92e4c95", 00:20:30.381 "is_configured": true, 00:20:30.381 "data_offset": 256, 00:20:30.381 "data_size": 7936 00:20:30.381 } 00:20:30.381 ] 00:20:30.381 }' 00:20:30.381 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:30.381 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:30.381 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:30.381 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:30.381 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:30.381 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:20:30.381 08:54:01 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:30.381 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:30.381 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.381 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:30.661 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.661 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:30.661 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.661 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:30.661 [2024-11-20 08:54:01.290983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:30.661 [2024-11-20 08:54:01.291257] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:30.661 [2024-11-20 08:54:01.291300] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:30.661 request: 00:20:30.661 { 00:20:30.661 "base_bdev": "BaseBdev1", 00:20:30.661 "raid_bdev": "raid_bdev1", 00:20:30.661 "method": "bdev_raid_add_base_bdev", 00:20:30.661 "req_id": 1 00:20:30.661 } 00:20:30.661 Got JSON-RPC error response 00:20:30.661 response: 00:20:30.661 { 00:20:30.661 "code": -22, 00:20:30.661 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:30.661 } 00:20:30.661 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:30.661 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@655 -- # es=1 00:20:30.661 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:30.661 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:30.661 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:30.661 08:54:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:31.618 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:31.618 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:31.618 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:31.618 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:31.618 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:31.618 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:31.618 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:31.618 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:31.618 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:31.618 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:31.618 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.618 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.618 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.618 08:54:02 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:31.618 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.618 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:31.618 "name": "raid_bdev1", 00:20:31.618 "uuid": "01044237-c959-4b04-ab6a-fb74670d7551", 00:20:31.618 "strip_size_kb": 0, 00:20:31.618 "state": "online", 00:20:31.618 "raid_level": "raid1", 00:20:31.618 "superblock": true, 00:20:31.618 "num_base_bdevs": 2, 00:20:31.618 "num_base_bdevs_discovered": 1, 00:20:31.618 "num_base_bdevs_operational": 1, 00:20:31.618 "base_bdevs_list": [ 00:20:31.618 { 00:20:31.618 "name": null, 00:20:31.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.618 "is_configured": false, 00:20:31.618 "data_offset": 0, 00:20:31.618 "data_size": 7936 00:20:31.618 }, 00:20:31.618 { 00:20:31.618 "name": "BaseBdev2", 00:20:31.618 "uuid": "e317e23e-eb52-5f66-a2a4-2489c92e4c95", 00:20:31.618 "is_configured": true, 00:20:31.618 "data_offset": 256, 00:20:31.618 "data_size": 7936 00:20:31.618 } 00:20:31.618 ] 00:20:31.618 }' 00:20:31.618 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:31.618 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:32.186 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:32.186 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.186 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:32.186 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:32.186 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.186 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.186 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.186 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.186 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:32.186 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.186 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.186 "name": "raid_bdev1", 00:20:32.186 "uuid": "01044237-c959-4b04-ab6a-fb74670d7551", 00:20:32.186 "strip_size_kb": 0, 00:20:32.186 "state": "online", 00:20:32.186 "raid_level": "raid1", 00:20:32.186 "superblock": true, 00:20:32.186 "num_base_bdevs": 2, 00:20:32.186 "num_base_bdevs_discovered": 1, 00:20:32.186 "num_base_bdevs_operational": 1, 00:20:32.186 "base_bdevs_list": [ 00:20:32.186 { 00:20:32.186 "name": null, 00:20:32.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.186 "is_configured": false, 00:20:32.186 "data_offset": 0, 00:20:32.186 "data_size": 7936 00:20:32.186 }, 00:20:32.186 { 00:20:32.186 "name": "BaseBdev2", 00:20:32.186 "uuid": "e317e23e-eb52-5f66-a2a4-2489c92e4c95", 00:20:32.186 "is_configured": true, 00:20:32.186 "data_offset": 256, 00:20:32.186 "data_size": 7936 00:20:32.186 } 00:20:32.186 ] 00:20:32.186 }' 00:20:32.186 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:32.186 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:32.186 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.186 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:32.186 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@784 -- # killprocess 86922 00:20:32.186 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86922 ']' 00:20:32.186 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86922 00:20:32.186 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:20:32.186 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:32.186 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86922 00:20:32.186 killing process with pid 86922 00:20:32.186 Received shutdown signal, test time was about 60.000000 seconds 00:20:32.186 00:20:32.186 Latency(us) 00:20:32.186 [2024-11-20T08:54:03.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.186 [2024-11-20T08:54:03.102Z] =================================================================================================================== 00:20:32.186 [2024-11-20T08:54:03.102Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:32.186 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:32.186 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:32.186 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86922' 00:20:32.186 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86922 00:20:32.186 [2024-11-20 08:54:03.011311] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:32.186 [2024-11-20 08:54:03.011468] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:32.186 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86922 00:20:32.186 [2024-11-20 08:54:03.011553] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:32.186 [2024-11-20 08:54:03.011574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:32.443 [2024-11-20 08:54:03.281635] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:33.814 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:20:33.814 00:20:33.814 real 0m21.614s 00:20:33.814 user 0m29.469s 00:20:33.814 sys 0m2.472s 00:20:33.814 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:33.814 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:33.814 ************************************ 00:20:33.814 END TEST raid_rebuild_test_sb_4k 00:20:33.814 ************************************ 00:20:33.814 08:54:04 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:20:33.814 08:54:04 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:20:33.814 08:54:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:33.814 08:54:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:33.814 08:54:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:33.814 ************************************ 00:20:33.814 START TEST raid_state_function_test_sb_md_separate 00:20:33.814 ************************************ 00:20:33.814 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:20:33.814 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:33.814 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:33.815 08:54:04 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:33.815 Process raid pid: 87622 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:33.815 
08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87622 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87622' 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87622 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87622 ']' 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:33.815 08:54:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.815 [2024-11-20 08:54:04.459587] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:20:33.815 [2024-11-20 08:54:04.459945] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.815 [2024-11-20 08:54:04.646273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.073 [2024-11-20 08:54:04.778701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.073 [2024-11-20 08:54:04.985612] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:34.073 [2024-11-20 08:54:04.985873] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:34.639 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.639 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:20:34.639 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:34.639 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.639 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:34.639 [2024-11-20 08:54:05.435943] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:34.639 [2024-11-20 08:54:05.436138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:20:34.639 [2024-11-20 08:54:05.436330] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:34.639 [2024-11-20 08:54:05.436497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:34.639 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.639 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:34.639 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:34.639 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:34.639 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:34.639 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:34.639 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:34.639 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.639 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.639 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.639 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.639 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:34.639 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:20:34.639 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.639 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:34.639 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.639 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.639 "name": "Existed_Raid", 00:20:34.639 "uuid": "ab10f929-a4ec-4f29-a7ce-8ccaad27a9ac", 00:20:34.639 "strip_size_kb": 0, 00:20:34.639 "state": "configuring", 00:20:34.639 "raid_level": "raid1", 00:20:34.639 "superblock": true, 00:20:34.639 "num_base_bdevs": 2, 00:20:34.639 "num_base_bdevs_discovered": 0, 00:20:34.639 "num_base_bdevs_operational": 2, 00:20:34.639 "base_bdevs_list": [ 00:20:34.639 { 00:20:34.639 "name": "BaseBdev1", 00:20:34.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.639 "is_configured": false, 00:20:34.639 "data_offset": 0, 00:20:34.639 "data_size": 0 00:20:34.639 }, 00:20:34.639 { 00:20:34.639 "name": "BaseBdev2", 00:20:34.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.639 "is_configured": false, 00:20:34.639 "data_offset": 0, 00:20:34.639 "data_size": 0 00:20:34.639 } 00:20:34.639 ] 00:20:34.639 }' 00:20:34.639 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.639 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.207 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:35.207 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.207 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.207 [2024-11-20 
08:54:05.964071] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:35.207 [2024-11-20 08:54:05.964114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:35.207 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.207 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:35.207 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.207 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.207 [2024-11-20 08:54:05.976065] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:35.207 [2024-11-20 08:54:05.976284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:35.207 [2024-11-20 08:54:05.976311] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:35.207 [2024-11-20 08:54:05.976333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:35.207 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.207 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:20:35.207 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.207 08:54:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.207 [2024-11-20 08:54:06.022471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:35.207 BaseBdev1 
00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.207 [ 00:20:35.207 { 00:20:35.207 "name": "BaseBdev1", 00:20:35.207 "aliases": [ 00:20:35.207 "6126d718-2c2a-4abd-8b2d-e832786487ed" 00:20:35.207 ], 00:20:35.207 "product_name": "Malloc disk", 00:20:35.207 
"block_size": 4096, 00:20:35.207 "num_blocks": 8192, 00:20:35.207 "uuid": "6126d718-2c2a-4abd-8b2d-e832786487ed", 00:20:35.207 "md_size": 32, 00:20:35.207 "md_interleave": false, 00:20:35.207 "dif_type": 0, 00:20:35.207 "assigned_rate_limits": { 00:20:35.207 "rw_ios_per_sec": 0, 00:20:35.207 "rw_mbytes_per_sec": 0, 00:20:35.207 "r_mbytes_per_sec": 0, 00:20:35.207 "w_mbytes_per_sec": 0 00:20:35.207 }, 00:20:35.207 "claimed": true, 00:20:35.207 "claim_type": "exclusive_write", 00:20:35.207 "zoned": false, 00:20:35.207 "supported_io_types": { 00:20:35.207 "read": true, 00:20:35.207 "write": true, 00:20:35.207 "unmap": true, 00:20:35.207 "flush": true, 00:20:35.207 "reset": true, 00:20:35.207 "nvme_admin": false, 00:20:35.207 "nvme_io": false, 00:20:35.207 "nvme_io_md": false, 00:20:35.207 "write_zeroes": true, 00:20:35.207 "zcopy": true, 00:20:35.207 "get_zone_info": false, 00:20:35.207 "zone_management": false, 00:20:35.207 "zone_append": false, 00:20:35.207 "compare": false, 00:20:35.207 "compare_and_write": false, 00:20:35.207 "abort": true, 00:20:35.207 "seek_hole": false, 00:20:35.207 "seek_data": false, 00:20:35.207 "copy": true, 00:20:35.207 "nvme_iov_md": false 00:20:35.207 }, 00:20:35.207 "memory_domains": [ 00:20:35.207 { 00:20:35.207 "dma_device_id": "system", 00:20:35.207 "dma_device_type": 1 00:20:35.207 }, 00:20:35.207 { 00:20:35.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.207 "dma_device_type": 2 00:20:35.207 } 00:20:35.207 ], 00:20:35.207 "driver_specific": {} 00:20:35.207 } 00:20:35.207 ] 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:35.207 08:54:06 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.207 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.207 "name": "Existed_Raid", 00:20:35.207 "uuid": "e3675361-2ab9-4599-90cd-df22dc7273fc", 
00:20:35.207 "strip_size_kb": 0, 00:20:35.207 "state": "configuring", 00:20:35.207 "raid_level": "raid1", 00:20:35.207 "superblock": true, 00:20:35.207 "num_base_bdevs": 2, 00:20:35.207 "num_base_bdevs_discovered": 1, 00:20:35.207 "num_base_bdevs_operational": 2, 00:20:35.207 "base_bdevs_list": [ 00:20:35.207 { 00:20:35.207 "name": "BaseBdev1", 00:20:35.207 "uuid": "6126d718-2c2a-4abd-8b2d-e832786487ed", 00:20:35.207 "is_configured": true, 00:20:35.207 "data_offset": 256, 00:20:35.207 "data_size": 7936 00:20:35.207 }, 00:20:35.207 { 00:20:35.207 "name": "BaseBdev2", 00:20:35.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.207 "is_configured": false, 00:20:35.207 "data_offset": 0, 00:20:35.208 "data_size": 0 00:20:35.208 } 00:20:35.208 ] 00:20:35.208 }' 00:20:35.208 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.208 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.774 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:35.774 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.774 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.774 [2024-11-20 08:54:06.578737] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:35.774 [2024-11-20 08:54:06.578970] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:35.774 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.774 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:35.774 08:54:06 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.774 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.774 [2024-11-20 08:54:06.586782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:35.774 [2024-11-20 08:54:06.589227] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:35.774 [2024-11-20 08:54:06.589281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:35.774 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.774 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:35.774 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:35.774 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:35.774 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:35.774 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:35.774 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:35.774 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:35.774 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:35.774 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.774 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.774 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.774 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.774 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.774 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:35.774 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.775 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.775 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.775 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.775 "name": "Existed_Raid", 00:20:35.775 "uuid": "1df33b79-b76c-4994-9252-1a365db071ef", 00:20:35.775 "strip_size_kb": 0, 00:20:35.775 "state": "configuring", 00:20:35.775 "raid_level": "raid1", 00:20:35.775 "superblock": true, 00:20:35.775 "num_base_bdevs": 2, 00:20:35.775 "num_base_bdevs_discovered": 1, 00:20:35.775 "num_base_bdevs_operational": 2, 00:20:35.775 "base_bdevs_list": [ 00:20:35.775 { 00:20:35.775 "name": "BaseBdev1", 00:20:35.775 "uuid": "6126d718-2c2a-4abd-8b2d-e832786487ed", 00:20:35.775 "is_configured": true, 00:20:35.775 "data_offset": 256, 00:20:35.775 "data_size": 7936 00:20:35.775 }, 00:20:35.775 { 00:20:35.775 "name": "BaseBdev2", 00:20:35.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.775 "is_configured": false, 00:20:35.775 "data_offset": 0, 00:20:35.775 "data_size": 0 00:20:35.775 } 00:20:35.775 ] 00:20:35.775 }' 00:20:35.775 08:54:06 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.775 08:54:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.342 [2024-11-20 08:54:07.155608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:36.342 [2024-11-20 08:54:07.156248] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:36.342 [2024-11-20 08:54:07.156276] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:36.342 [2024-11-20 08:54:07.156379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:36.342 [2024-11-20 08:54:07.156538] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:36.342 [2024-11-20 08:54:07.156573] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:36.342 BaseBdev2 00:20:36.342 [2024-11-20 08:54:07.156688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.342 [ 00:20:36.342 { 00:20:36.342 "name": "BaseBdev2", 00:20:36.342 "aliases": [ 00:20:36.342 "6af11cb4-915e-48ab-99c6-4d7f1ffa1885" 00:20:36.342 ], 00:20:36.342 "product_name": "Malloc disk", 00:20:36.342 "block_size": 4096, 00:20:36.342 "num_blocks": 8192, 00:20:36.342 "uuid": "6af11cb4-915e-48ab-99c6-4d7f1ffa1885", 00:20:36.342 "md_size": 32, 00:20:36.342 "md_interleave": false, 00:20:36.342 "dif_type": 0, 00:20:36.342 "assigned_rate_limits": { 00:20:36.342 "rw_ios_per_sec": 0, 00:20:36.342 "rw_mbytes_per_sec": 0, 00:20:36.342 "r_mbytes_per_sec": 0, 00:20:36.342 "w_mbytes_per_sec": 0 00:20:36.342 }, 00:20:36.342 "claimed": true, 00:20:36.342 "claim_type": 
"exclusive_write", 00:20:36.342 "zoned": false, 00:20:36.342 "supported_io_types": { 00:20:36.342 "read": true, 00:20:36.342 "write": true, 00:20:36.342 "unmap": true, 00:20:36.342 "flush": true, 00:20:36.342 "reset": true, 00:20:36.342 "nvme_admin": false, 00:20:36.342 "nvme_io": false, 00:20:36.342 "nvme_io_md": false, 00:20:36.342 "write_zeroes": true, 00:20:36.342 "zcopy": true, 00:20:36.342 "get_zone_info": false, 00:20:36.342 "zone_management": false, 00:20:36.342 "zone_append": false, 00:20:36.342 "compare": false, 00:20:36.342 "compare_and_write": false, 00:20:36.342 "abort": true, 00:20:36.342 "seek_hole": false, 00:20:36.342 "seek_data": false, 00:20:36.342 "copy": true, 00:20:36.342 "nvme_iov_md": false 00:20:36.342 }, 00:20:36.342 "memory_domains": [ 00:20:36.342 { 00:20:36.342 "dma_device_id": "system", 00:20:36.342 "dma_device_type": 1 00:20:36.342 }, 00:20:36.342 { 00:20:36.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.342 "dma_device_type": 2 00:20:36.342 } 00:20:36.342 ], 00:20:36.342 "driver_specific": {} 00:20:36.342 } 00:20:36.342 ] 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:36.342 
08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.342 "name": "Existed_Raid", 00:20:36.342 "uuid": "1df33b79-b76c-4994-9252-1a365db071ef", 00:20:36.342 "strip_size_kb": 0, 00:20:36.342 "state": "online", 00:20:36.342 "raid_level": "raid1", 00:20:36.342 "superblock": true, 00:20:36.342 "num_base_bdevs": 2, 00:20:36.342 "num_base_bdevs_discovered": 2, 00:20:36.342 "num_base_bdevs_operational": 2, 00:20:36.342 
"base_bdevs_list": [ 00:20:36.342 { 00:20:36.342 "name": "BaseBdev1", 00:20:36.342 "uuid": "6126d718-2c2a-4abd-8b2d-e832786487ed", 00:20:36.342 "is_configured": true, 00:20:36.342 "data_offset": 256, 00:20:36.342 "data_size": 7936 00:20:36.342 }, 00:20:36.342 { 00:20:36.342 "name": "BaseBdev2", 00:20:36.342 "uuid": "6af11cb4-915e-48ab-99c6-4d7f1ffa1885", 00:20:36.342 "is_configured": true, 00:20:36.342 "data_offset": 256, 00:20:36.342 "data_size": 7936 00:20:36.342 } 00:20:36.342 ] 00:20:36.342 }' 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.342 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.909 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:36.909 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:36.909 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:36.909 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:36.909 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:36.909 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:36.909 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:36.909 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:36.909 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.909 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:20:36.909 [2024-11-20 08:54:07.712292] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:36.909 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.909 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:36.909 "name": "Existed_Raid", 00:20:36.909 "aliases": [ 00:20:36.909 "1df33b79-b76c-4994-9252-1a365db071ef" 00:20:36.909 ], 00:20:36.909 "product_name": "Raid Volume", 00:20:36.909 "block_size": 4096, 00:20:36.909 "num_blocks": 7936, 00:20:36.909 "uuid": "1df33b79-b76c-4994-9252-1a365db071ef", 00:20:36.909 "md_size": 32, 00:20:36.909 "md_interleave": false, 00:20:36.909 "dif_type": 0, 00:20:36.909 "assigned_rate_limits": { 00:20:36.909 "rw_ios_per_sec": 0, 00:20:36.909 "rw_mbytes_per_sec": 0, 00:20:36.909 "r_mbytes_per_sec": 0, 00:20:36.909 "w_mbytes_per_sec": 0 00:20:36.909 }, 00:20:36.909 "claimed": false, 00:20:36.909 "zoned": false, 00:20:36.909 "supported_io_types": { 00:20:36.909 "read": true, 00:20:36.909 "write": true, 00:20:36.909 "unmap": false, 00:20:36.909 "flush": false, 00:20:36.909 "reset": true, 00:20:36.909 "nvme_admin": false, 00:20:36.909 "nvme_io": false, 00:20:36.909 "nvme_io_md": false, 00:20:36.909 "write_zeroes": true, 00:20:36.909 "zcopy": false, 00:20:36.909 "get_zone_info": false, 00:20:36.909 "zone_management": false, 00:20:36.909 "zone_append": false, 00:20:36.909 "compare": false, 00:20:36.909 "compare_and_write": false, 00:20:36.909 "abort": false, 00:20:36.909 "seek_hole": false, 00:20:36.909 "seek_data": false, 00:20:36.909 "copy": false, 00:20:36.909 "nvme_iov_md": false 00:20:36.909 }, 00:20:36.909 "memory_domains": [ 00:20:36.909 { 00:20:36.909 "dma_device_id": "system", 00:20:36.909 "dma_device_type": 1 00:20:36.909 }, 00:20:36.909 { 00:20:36.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.909 "dma_device_type": 2 00:20:36.909 }, 00:20:36.909 { 
00:20:36.909 "dma_device_id": "system", 00:20:36.909 "dma_device_type": 1 00:20:36.909 }, 00:20:36.909 { 00:20:36.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.910 "dma_device_type": 2 00:20:36.910 } 00:20:36.910 ], 00:20:36.910 "driver_specific": { 00:20:36.910 "raid": { 00:20:36.910 "uuid": "1df33b79-b76c-4994-9252-1a365db071ef", 00:20:36.910 "strip_size_kb": 0, 00:20:36.910 "state": "online", 00:20:36.910 "raid_level": "raid1", 00:20:36.910 "superblock": true, 00:20:36.910 "num_base_bdevs": 2, 00:20:36.910 "num_base_bdevs_discovered": 2, 00:20:36.910 "num_base_bdevs_operational": 2, 00:20:36.910 "base_bdevs_list": [ 00:20:36.910 { 00:20:36.910 "name": "BaseBdev1", 00:20:36.910 "uuid": "6126d718-2c2a-4abd-8b2d-e832786487ed", 00:20:36.910 "is_configured": true, 00:20:36.910 "data_offset": 256, 00:20:36.910 "data_size": 7936 00:20:36.910 }, 00:20:36.910 { 00:20:36.910 "name": "BaseBdev2", 00:20:36.910 "uuid": "6af11cb4-915e-48ab-99c6-4d7f1ffa1885", 00:20:36.910 "is_configured": true, 00:20:36.910 "data_offset": 256, 00:20:36.910 "data_size": 7936 00:20:36.910 } 00:20:36.910 ] 00:20:36.910 } 00:20:36.910 } 00:20:36.910 }' 00:20:36.910 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:36.910 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:36.910 BaseBdev2' 00:20:36.910 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:37.169 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:37.169 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:37.169 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:37.169 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:37.169 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.169 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.169 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.169 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:37.169 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:37.169 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:37.169 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:37.169 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:37.169 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.169 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.169 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.169 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:37.169 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:20:37.169 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:37.170 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.170 08:54:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.170 [2024-11-20 08:54:07.959972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:37.170 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.170 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:37.170 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:37.170 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:37.170 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:20:37.170 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:37.170 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:37.170 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:37.170 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:37.170 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:37.170 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:37.170 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:20:37.170 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.170 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.170 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.170 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.170 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.170 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.170 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.170 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.170 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.429 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.429 "name": "Existed_Raid", 00:20:37.429 "uuid": "1df33b79-b76c-4994-9252-1a365db071ef", 00:20:37.429 "strip_size_kb": 0, 00:20:37.429 "state": "online", 00:20:37.429 "raid_level": "raid1", 00:20:37.429 "superblock": true, 00:20:37.429 "num_base_bdevs": 2, 00:20:37.429 "num_base_bdevs_discovered": 1, 00:20:37.429 "num_base_bdevs_operational": 1, 00:20:37.429 "base_bdevs_list": [ 00:20:37.429 { 00:20:37.429 "name": null, 00:20:37.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.429 "is_configured": false, 00:20:37.429 "data_offset": 0, 00:20:37.429 "data_size": 7936 00:20:37.429 }, 00:20:37.429 { 00:20:37.429 "name": "BaseBdev2", 00:20:37.429 "uuid": 
"6af11cb4-915e-48ab-99c6-4d7f1ffa1885", 00:20:37.429 "is_configured": true, 00:20:37.429 "data_offset": 256, 00:20:37.429 "data_size": 7936 00:20:37.429 } 00:20:37.429 ] 00:20:37.429 }' 00:20:37.429 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.429 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.688 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:37.688 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:37.688 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.688 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:37.688 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.688 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.688 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.688 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:37.688 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:37.688 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:37.688 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.688 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.688 [2024-11-20 08:54:08.583056] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:37.688 [2024-11-20 08:54:08.583211] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:37.946 [2024-11-20 08:54:08.671905] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:37.947 [2024-11-20 08:54:08.671985] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:37.947 [2024-11-20 08:54:08.672008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:37.947 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.947 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:37.947 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:37.947 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.947 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:37.947 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.947 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.947 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.947 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:37.947 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:37.947 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:37.947 08:54:08 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87622 00:20:37.947 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87622 ']' 00:20:37.947 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87622 00:20:37.947 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:20:37.947 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.947 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87622 00:20:37.947 killing process with pid 87622 00:20:37.947 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:37.947 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:37.947 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87622' 00:20:37.947 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87622 00:20:37.947 [2024-11-20 08:54:08.758668] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:37.947 08:54:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87622 00:20:37.947 [2024-11-20 08:54:08.773684] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:39.322 08:54:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:20:39.322 00:20:39.322 real 0m5.468s 00:20:39.322 user 0m8.201s 00:20:39.322 sys 0m0.812s 00:20:39.322 ************************************ 00:20:39.322 END TEST raid_state_function_test_sb_md_separate 00:20:39.322 
************************************ 00:20:39.322 08:54:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:39.322 08:54:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:39.322 08:54:09 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:20:39.322 08:54:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:39.322 08:54:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:39.322 08:54:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:39.322 ************************************ 00:20:39.322 START TEST raid_superblock_test_md_separate 00:20:39.322 ************************************ 00:20:39.322 08:54:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:20:39.322 08:54:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:39.322 08:54:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:39.322 08:54:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:39.322 08:54:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:39.322 08:54:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:39.322 08:54:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:39.322 08:54:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:39.322 08:54:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:39.322 08:54:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:20:39.322 08:54:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:39.322 08:54:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:39.322 08:54:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:39.322 08:54:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:39.323 08:54:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:39.323 08:54:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:39.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.323 08:54:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87879 00:20:39.323 08:54:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87879 00:20:39.323 08:54:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:39.323 08:54:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87879 ']' 00:20:39.323 08:54:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.323 08:54:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.323 08:54:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:39.323 08:54:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.323 08:54:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:39.323 [2024-11-20 08:54:10.012684] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:20:39.323 [2024-11-20 08:54:10.013368] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87879 ] 00:20:39.323 [2024-11-20 08:54:10.198028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.582 [2024-11-20 08:54:10.364100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.841 [2024-11-20 08:54:10.572415] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:39.841 [2024-11-20 08:54:10.572467] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:40.100 08:54:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.100 08:54:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:20:40.100 08:54:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:40.100 08:54:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:40.100 08:54:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:40.100 08:54:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:40.100 08:54:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:40.100 08:54:10 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:40.100 08:54:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:40.100 08:54:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:40.100 08:54:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:20:40.100 08:54:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.100 08:54:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.100 malloc1 00:20:40.100 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.100 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:40.100 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.100 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.100 [2024-11-20 08:54:11.013630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:40.100 [2024-11-20 08:54:11.013846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:40.100 [2024-11-20 08:54:11.013893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:40.100 [2024-11-20 08:54:11.013911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:40.360 [2024-11-20 08:54:11.016469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:40.360 [2024-11-20 08:54:11.016517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:20:40.360 pt1 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.360 malloc2 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.360 08:54:11 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.360 [2024-11-20 08:54:11.067198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:40.360 [2024-11-20 08:54:11.067408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:40.360 [2024-11-20 08:54:11.067453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:40.360 [2024-11-20 08:54:11.067469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:40.360 [2024-11-20 08:54:11.070002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:40.360 [2024-11-20 08:54:11.070047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:40.360 pt2 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.360 [2024-11-20 08:54:11.075255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:40.360 [2024-11-20 08:54:11.077671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:40.360 [2024-11-20 08:54:11.077909] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:40.360 [2024-11-20 08:54:11.077932] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:40.360 [2024-11-20 08:54:11.078033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:40.360 [2024-11-20 08:54:11.078246] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:40.360 [2024-11-20 08:54:11.078270] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:40.360 [2024-11-20 08:54:11.078406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:40.360 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:40.361 08:54:11 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.361 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.361 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.361 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.361 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.361 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:40.361 "name": "raid_bdev1", 00:20:40.361 "uuid": "37f8f07d-725e-4cf9-bc5b-116d32f5ca14", 00:20:40.361 "strip_size_kb": 0, 00:20:40.361 "state": "online", 00:20:40.361 "raid_level": "raid1", 00:20:40.361 "superblock": true, 00:20:40.361 "num_base_bdevs": 2, 00:20:40.361 "num_base_bdevs_discovered": 2, 00:20:40.361 "num_base_bdevs_operational": 2, 00:20:40.361 "base_bdevs_list": [ 00:20:40.361 { 00:20:40.361 "name": "pt1", 00:20:40.361 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:40.361 "is_configured": true, 00:20:40.361 "data_offset": 256, 00:20:40.361 "data_size": 7936 00:20:40.361 }, 00:20:40.361 { 00:20:40.361 "name": "pt2", 00:20:40.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:40.361 "is_configured": true, 00:20:40.361 "data_offset": 256, 00:20:40.361 "data_size": 7936 00:20:40.361 } 00:20:40.361 ] 00:20:40.361 }' 00:20:40.361 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:40.361 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.929 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:40.929 08:54:11 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:40.929 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:40.929 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:40.929 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:40.929 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:40.929 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:40.929 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:40.929 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.929 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.930 [2024-11-20 08:54:11.587824] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:40.930 "name": "raid_bdev1", 00:20:40.930 "aliases": [ 00:20:40.930 "37f8f07d-725e-4cf9-bc5b-116d32f5ca14" 00:20:40.930 ], 00:20:40.930 "product_name": "Raid Volume", 00:20:40.930 "block_size": 4096, 00:20:40.930 "num_blocks": 7936, 00:20:40.930 "uuid": "37f8f07d-725e-4cf9-bc5b-116d32f5ca14", 00:20:40.930 "md_size": 32, 00:20:40.930 "md_interleave": false, 00:20:40.930 "dif_type": 0, 00:20:40.930 "assigned_rate_limits": { 00:20:40.930 "rw_ios_per_sec": 0, 00:20:40.930 "rw_mbytes_per_sec": 0, 00:20:40.930 "r_mbytes_per_sec": 0, 00:20:40.930 "w_mbytes_per_sec": 0 00:20:40.930 }, 00:20:40.930 "claimed": false, 00:20:40.930 "zoned": false, 
00:20:40.930 "supported_io_types": { 00:20:40.930 "read": true, 00:20:40.930 "write": true, 00:20:40.930 "unmap": false, 00:20:40.930 "flush": false, 00:20:40.930 "reset": true, 00:20:40.930 "nvme_admin": false, 00:20:40.930 "nvme_io": false, 00:20:40.930 "nvme_io_md": false, 00:20:40.930 "write_zeroes": true, 00:20:40.930 "zcopy": false, 00:20:40.930 "get_zone_info": false, 00:20:40.930 "zone_management": false, 00:20:40.930 "zone_append": false, 00:20:40.930 "compare": false, 00:20:40.930 "compare_and_write": false, 00:20:40.930 "abort": false, 00:20:40.930 "seek_hole": false, 00:20:40.930 "seek_data": false, 00:20:40.930 "copy": false, 00:20:40.930 "nvme_iov_md": false 00:20:40.930 }, 00:20:40.930 "memory_domains": [ 00:20:40.930 { 00:20:40.930 "dma_device_id": "system", 00:20:40.930 "dma_device_type": 1 00:20:40.930 }, 00:20:40.930 { 00:20:40.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.930 "dma_device_type": 2 00:20:40.930 }, 00:20:40.930 { 00:20:40.930 "dma_device_id": "system", 00:20:40.930 "dma_device_type": 1 00:20:40.930 }, 00:20:40.930 { 00:20:40.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.930 "dma_device_type": 2 00:20:40.930 } 00:20:40.930 ], 00:20:40.930 "driver_specific": { 00:20:40.930 "raid": { 00:20:40.930 "uuid": "37f8f07d-725e-4cf9-bc5b-116d32f5ca14", 00:20:40.930 "strip_size_kb": 0, 00:20:40.930 "state": "online", 00:20:40.930 "raid_level": "raid1", 00:20:40.930 "superblock": true, 00:20:40.930 "num_base_bdevs": 2, 00:20:40.930 "num_base_bdevs_discovered": 2, 00:20:40.930 "num_base_bdevs_operational": 2, 00:20:40.930 "base_bdevs_list": [ 00:20:40.930 { 00:20:40.930 "name": "pt1", 00:20:40.930 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:40.930 "is_configured": true, 00:20:40.930 "data_offset": 256, 00:20:40.930 "data_size": 7936 00:20:40.930 }, 00:20:40.930 { 00:20:40.930 "name": "pt2", 00:20:40.930 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:40.930 "is_configured": true, 00:20:40.930 "data_offset": 256, 
00:20:40.930 "data_size": 7936 00:20:40.930 } 00:20:40.930 ] 00:20:40.930 } 00:20:40.930 } 00:20:40.930 }' 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:40.930 pt2' 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:40.930 [2024-11-20 08:54:11.823825] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:40.930 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=37f8f07d-725e-4cf9-bc5b-116d32f5ca14 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 37f8f07d-725e-4cf9-bc5b-116d32f5ca14 ']' 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.190 [2024-11-20 08:54:11.875501] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:41.190 [2024-11-20 08:54:11.875535] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:41.190 [2024-11-20 08:54:11.875652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:41.190 [2024-11-20 08:54:11.875733] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:41.190 [2024-11-20 08:54:11.875754] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:20:41.190 08:54:11 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:41.190 08:54:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:41.190 08:54:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.190 08:54:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.190 [2024-11-20 08:54:12.007853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:41.190 [2024-11-20 08:54:12.010601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:41.190 [2024-11-20 08:54:12.010834] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:41.190 [2024-11-20 08:54:12.011096] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:41.190 [2024-11-20 08:54:12.011134] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:41.190 [2024-11-20 08:54:12.011173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:41.190 request: 00:20:41.190 { 00:20:41.191 "name": 
"raid_bdev1", 00:20:41.191 "raid_level": "raid1", 00:20:41.191 "base_bdevs": [ 00:20:41.191 "malloc1", 00:20:41.191 "malloc2" 00:20:41.191 ], 00:20:41.191 "superblock": false, 00:20:41.191 "method": "bdev_raid_create", 00:20:41.191 "req_id": 1 00:20:41.191 } 00:20:41.191 Got JSON-RPC error response 00:20:41.191 response: 00:20:41.191 { 00:20:41.191 "code": -17, 00:20:41.191 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:41.191 } 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.191 [2024-11-20 08:54:12.071847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:41.191 [2024-11-20 08:54:12.072027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:41.191 [2024-11-20 08:54:12.072169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:41.191 [2024-11-20 08:54:12.072347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:41.191 [2024-11-20 08:54:12.074959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:41.191 [2024-11-20 08:54:12.075125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:41.191 [2024-11-20 08:54:12.075341] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:41.191 [2024-11-20 08:54:12.075518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:41.191 pt1 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.191 08:54:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.450 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:41.450 "name": "raid_bdev1", 00:20:41.450 "uuid": "37f8f07d-725e-4cf9-bc5b-116d32f5ca14", 00:20:41.450 "strip_size_kb": 0, 00:20:41.450 "state": "configuring", 00:20:41.450 "raid_level": "raid1", 00:20:41.450 "superblock": true, 00:20:41.450 "num_base_bdevs": 2, 00:20:41.450 "num_base_bdevs_discovered": 1, 00:20:41.450 "num_base_bdevs_operational": 2, 00:20:41.450 "base_bdevs_list": [ 00:20:41.450 { 00:20:41.450 "name": "pt1", 00:20:41.450 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:41.450 "is_configured": true, 00:20:41.450 "data_offset": 256, 00:20:41.450 "data_size": 7936 00:20:41.450 }, 00:20:41.450 { 00:20:41.450 "name": null, 00:20:41.450 
"uuid": "00000000-0000-0000-0000-000000000002", 00:20:41.450 "is_configured": false, 00:20:41.450 "data_offset": 256, 00:20:41.450 "data_size": 7936 00:20:41.450 } 00:20:41.450 ] 00:20:41.450 }' 00:20:41.450 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:41.450 08:54:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.711 [2024-11-20 08:54:12.564015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:41.711 [2024-11-20 08:54:12.564306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:41.711 [2024-11-20 08:54:12.564349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:41.711 [2024-11-20 08:54:12.564368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:41.711 [2024-11-20 08:54:12.564684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:41.711 [2024-11-20 08:54:12.564720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:41.711 [2024-11-20 08:54:12.564801] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:20:41.711 [2024-11-20 08:54:12.564837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:41.711 [2024-11-20 08:54:12.564992] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:41.711 [2024-11-20 08:54:12.565018] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:41.711 [2024-11-20 08:54:12.565117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:41.711 [2024-11-20 08:54:12.565285] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:41.711 [2024-11-20 08:54:12.565302] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:41.711 [2024-11-20 08:54:12.565440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:41.711 pt2 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.711 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:41.711 "name": "raid_bdev1", 00:20:41.711 "uuid": "37f8f07d-725e-4cf9-bc5b-116d32f5ca14", 00:20:41.711 "strip_size_kb": 0, 00:20:41.711 "state": "online", 00:20:41.711 "raid_level": "raid1", 00:20:41.711 "superblock": true, 00:20:41.711 "num_base_bdevs": 2, 00:20:41.711 "num_base_bdevs_discovered": 2, 00:20:41.711 "num_base_bdevs_operational": 2, 00:20:41.711 "base_bdevs_list": [ 00:20:41.711 { 00:20:41.711 "name": "pt1", 00:20:41.711 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:41.711 "is_configured": true, 00:20:41.711 "data_offset": 256, 00:20:41.711 "data_size": 7936 00:20:41.711 }, 00:20:41.711 { 00:20:41.711 "name": "pt2", 00:20:41.711 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:41.711 "is_configured": true, 00:20:41.711 "data_offset": 256, 
00:20:41.711 "data_size": 7936 00:20:41.711 } 00:20:41.711 ] 00:20:41.711 }' 00:20:41.970 08:54:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:41.970 08:54:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:42.229 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:42.229 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:42.229 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:42.229 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:42.229 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:42.229 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:42.229 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:42.229 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:42.229 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.229 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:42.229 [2024-11-20 08:54:13.056504] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:42.229 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.229 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:42.229 "name": "raid_bdev1", 00:20:42.229 "aliases": [ 00:20:42.229 "37f8f07d-725e-4cf9-bc5b-116d32f5ca14" 00:20:42.229 ], 00:20:42.229 "product_name": 
"Raid Volume", 00:20:42.229 "block_size": 4096, 00:20:42.229 "num_blocks": 7936, 00:20:42.229 "uuid": "37f8f07d-725e-4cf9-bc5b-116d32f5ca14", 00:20:42.229 "md_size": 32, 00:20:42.229 "md_interleave": false, 00:20:42.229 "dif_type": 0, 00:20:42.229 "assigned_rate_limits": { 00:20:42.229 "rw_ios_per_sec": 0, 00:20:42.229 "rw_mbytes_per_sec": 0, 00:20:42.229 "r_mbytes_per_sec": 0, 00:20:42.229 "w_mbytes_per_sec": 0 00:20:42.229 }, 00:20:42.229 "claimed": false, 00:20:42.229 "zoned": false, 00:20:42.229 "supported_io_types": { 00:20:42.229 "read": true, 00:20:42.229 "write": true, 00:20:42.229 "unmap": false, 00:20:42.229 "flush": false, 00:20:42.229 "reset": true, 00:20:42.229 "nvme_admin": false, 00:20:42.229 "nvme_io": false, 00:20:42.229 "nvme_io_md": false, 00:20:42.229 "write_zeroes": true, 00:20:42.229 "zcopy": false, 00:20:42.229 "get_zone_info": false, 00:20:42.229 "zone_management": false, 00:20:42.229 "zone_append": false, 00:20:42.229 "compare": false, 00:20:42.229 "compare_and_write": false, 00:20:42.229 "abort": false, 00:20:42.229 "seek_hole": false, 00:20:42.229 "seek_data": false, 00:20:42.229 "copy": false, 00:20:42.229 "nvme_iov_md": false 00:20:42.229 }, 00:20:42.229 "memory_domains": [ 00:20:42.229 { 00:20:42.229 "dma_device_id": "system", 00:20:42.229 "dma_device_type": 1 00:20:42.229 }, 00:20:42.229 { 00:20:42.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.229 "dma_device_type": 2 00:20:42.229 }, 00:20:42.229 { 00:20:42.229 "dma_device_id": "system", 00:20:42.229 "dma_device_type": 1 00:20:42.229 }, 00:20:42.229 { 00:20:42.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.229 "dma_device_type": 2 00:20:42.229 } 00:20:42.229 ], 00:20:42.229 "driver_specific": { 00:20:42.229 "raid": { 00:20:42.229 "uuid": "37f8f07d-725e-4cf9-bc5b-116d32f5ca14", 00:20:42.229 "strip_size_kb": 0, 00:20:42.229 "state": "online", 00:20:42.229 "raid_level": "raid1", 00:20:42.229 "superblock": true, 00:20:42.229 "num_base_bdevs": 2, 00:20:42.229 
"num_base_bdevs_discovered": 2, 00:20:42.229 "num_base_bdevs_operational": 2, 00:20:42.229 "base_bdevs_list": [ 00:20:42.229 { 00:20:42.229 "name": "pt1", 00:20:42.229 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:42.229 "is_configured": true, 00:20:42.229 "data_offset": 256, 00:20:42.229 "data_size": 7936 00:20:42.229 }, 00:20:42.229 { 00:20:42.229 "name": "pt2", 00:20:42.229 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:42.229 "is_configured": true, 00:20:42.229 "data_offset": 256, 00:20:42.229 "data_size": 7936 00:20:42.229 } 00:20:42.229 ] 00:20:42.229 } 00:20:42.229 } 00:20:42.229 }' 00:20:42.229 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:42.229 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:42.229 pt2' 00:20:42.229 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.488 
08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:42.488 [2024-11-20 08:54:13.292527] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 37f8f07d-725e-4cf9-bc5b-116d32f5ca14 '!=' 37f8f07d-725e-4cf9-bc5b-116d32f5ca14 ']' 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:42.488 [2024-11-20 08:54:13.344257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.488 08:54:13 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.488 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.488 "name": "raid_bdev1", 00:20:42.488 "uuid": "37f8f07d-725e-4cf9-bc5b-116d32f5ca14", 00:20:42.488 "strip_size_kb": 0, 00:20:42.488 "state": "online", 00:20:42.488 "raid_level": "raid1", 00:20:42.488 "superblock": true, 00:20:42.488 "num_base_bdevs": 2, 00:20:42.488 "num_base_bdevs_discovered": 1, 00:20:42.488 "num_base_bdevs_operational": 1, 00:20:42.488 "base_bdevs_list": [ 00:20:42.488 { 00:20:42.488 "name": null, 00:20:42.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.488 "is_configured": false, 00:20:42.488 "data_offset": 0, 00:20:42.488 "data_size": 7936 00:20:42.488 }, 00:20:42.488 { 00:20:42.489 "name": "pt2", 00:20:42.489 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:42.489 "is_configured": true, 00:20:42.489 "data_offset": 256, 00:20:42.489 "data_size": 7936 00:20:42.489 } 00:20:42.489 ] 00:20:42.489 }' 00:20:42.489 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:20:42.489 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:43.055 [2024-11-20 08:54:13.872365] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:43.055 [2024-11-20 08:54:13.872533] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:43.055 [2024-11-20 08:54:13.872770] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:43.055 [2024-11-20 08:54:13.872961] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:43.055 [2024-11-20 08:54:13.873129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:43.055 08:54:13 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:43.055 [2024-11-20 08:54:13.948380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:43.055 [2024-11-20 08:54:13.948458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.055 
[2024-11-20 08:54:13.948485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:43.055 [2024-11-20 08:54:13.948504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.055 [2024-11-20 08:54:13.951123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.055 [2024-11-20 08:54:13.951307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:43.055 [2024-11-20 08:54:13.951387] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:43.055 [2024-11-20 08:54:13.951454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:43.055 [2024-11-20 08:54:13.951598] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:43.055 [2024-11-20 08:54:13.951622] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:43.055 [2024-11-20 08:54:13.951715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:43.055 [2024-11-20 08:54:13.951863] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:43.055 [2024-11-20 08:54:13.951879] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:43.055 [2024-11-20 08:54:13.951996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:43.055 pt2 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:43.055 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:43.056 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.056 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.056 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.056 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.056 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.056 08:54:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.056 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.056 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:43.314 08:54:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.314 08:54:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.314 "name": "raid_bdev1", 00:20:43.314 "uuid": "37f8f07d-725e-4cf9-bc5b-116d32f5ca14", 00:20:43.314 "strip_size_kb": 0, 00:20:43.314 "state": "online", 00:20:43.314 "raid_level": "raid1", 00:20:43.314 "superblock": true, 00:20:43.314 "num_base_bdevs": 2, 00:20:43.314 "num_base_bdevs_discovered": 1, 00:20:43.314 "num_base_bdevs_operational": 1, 00:20:43.314 "base_bdevs_list": [ 00:20:43.314 { 00:20:43.314 
"name": null, 00:20:43.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.314 "is_configured": false, 00:20:43.314 "data_offset": 256, 00:20:43.314 "data_size": 7936 00:20:43.314 }, 00:20:43.314 { 00:20:43.314 "name": "pt2", 00:20:43.314 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:43.314 "is_configured": true, 00:20:43.314 "data_offset": 256, 00:20:43.314 "data_size": 7936 00:20:43.314 } 00:20:43.314 ] 00:20:43.314 }' 00:20:43.314 08:54:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:43.314 08:54:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:43.573 08:54:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:43.573 08:54:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.573 08:54:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:43.573 [2024-11-20 08:54:14.444486] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:43.573 [2024-11-20 08:54:14.444523] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:43.573 [2024-11-20 08:54:14.444612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:43.573 [2024-11-20 08:54:14.444682] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:43.573 [2024-11-20 08:54:14.444698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:43.573 08:54:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.573 08:54:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.573 08:54:14 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:43.573 08:54:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.573 08:54:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:43.573 08:54:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:43.832 [2024-11-20 08:54:14.508522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:43.832 [2024-11-20 08:54:14.508713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.832 [2024-11-20 08:54:14.508789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:43.832 [2024-11-20 08:54:14.508964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.832 [2024-11-20 08:54:14.511811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.832 [2024-11-20 08:54:14.511962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:43.832 [2024-11-20 08:54:14.512161] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:20:43.832 [2024-11-20 08:54:14.512354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:43.832 [2024-11-20 08:54:14.512634] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:43.832 [2024-11-20 08:54:14.512788] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:43.832 [2024-11-20 08:54:14.512903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:43.832 [2024-11-20 08:54:14.513109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:43.832 [2024-11-20 08:54:14.513376] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:43.832 [2024-11-20 08:54:14.513501] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:43.832 [2024-11-20 08:54:14.513649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:43.832 pt1 00:20:43.832 [2024-11-20 08:54:14.513909] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:43.832 [2024-11-20 08:54:14.514043] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.832 [2024-11-20 08:54:14.514271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.832 "name": "raid_bdev1", 00:20:43.832 "uuid": "37f8f07d-725e-4cf9-bc5b-116d32f5ca14", 00:20:43.832 "strip_size_kb": 0, 00:20:43.832 "state": "online", 00:20:43.832 "raid_level": "raid1", 00:20:43.832 "superblock": true, 00:20:43.832 "num_base_bdevs": 2, 00:20:43.832 "num_base_bdevs_discovered": 1, 00:20:43.832 
"num_base_bdevs_operational": 1, 00:20:43.832 "base_bdevs_list": [ 00:20:43.832 { 00:20:43.832 "name": null, 00:20:43.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.832 "is_configured": false, 00:20:43.832 "data_offset": 256, 00:20:43.832 "data_size": 7936 00:20:43.832 }, 00:20:43.832 { 00:20:43.832 "name": "pt2", 00:20:43.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:43.832 "is_configured": true, 00:20:43.832 "data_offset": 256, 00:20:43.832 "data_size": 7936 00:20:43.832 } 00:20:43.832 ] 00:20:43.832 }' 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:43.832 08:54:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:44.399 08:54:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:44.399 08:54:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.399 08:54:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:44.400 08:54:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:44.400 08:54:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.400 08:54:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:44.400 08:54:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:44.400 08:54:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:44.400 08:54:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.400 08:54:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:44.400 [2024-11-20 
08:54:15.085190] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:44.400 08:54:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.400 08:54:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 37f8f07d-725e-4cf9-bc5b-116d32f5ca14 '!=' 37f8f07d-725e-4cf9-bc5b-116d32f5ca14 ']' 00:20:44.400 08:54:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87879 00:20:44.400 08:54:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87879 ']' 00:20:44.400 08:54:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87879 00:20:44.400 08:54:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:20:44.400 08:54:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.400 08:54:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87879 00:20:44.400 killing process with pid 87879 00:20:44.400 08:54:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:44.400 08:54:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:44.400 08:54:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87879' 00:20:44.400 08:54:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87879 00:20:44.400 [2024-11-20 08:54:15.161636] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:44.400 08:54:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87879 00:20:44.400 [2024-11-20 08:54:15.161743] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:20:44.400 [2024-11-20 08:54:15.161809] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:44.400 [2024-11-20 08:54:15.161835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:44.658 [2024-11-20 08:54:15.355740] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:45.594 08:54:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:20:45.594 00:20:45.594 real 0m6.515s 00:20:45.594 user 0m10.277s 00:20:45.594 sys 0m0.942s 00:20:45.594 08:54:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:45.594 08:54:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.594 ************************************ 00:20:45.594 END TEST raid_superblock_test_md_separate 00:20:45.594 ************************************ 00:20:45.594 08:54:16 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:20:45.594 08:54:16 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:20:45.594 08:54:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:45.594 08:54:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:45.594 08:54:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:45.594 ************************************ 00:20:45.594 START TEST raid_rebuild_test_sb_md_separate 00:20:45.594 ************************************ 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:45.594 
08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88206 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88206 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88206 ']' 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.594 08:54:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:45.853 [2024-11-20 08:54:16.556067] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:20:45.853 [2024-11-20 08:54:16.556289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88206 ] 00:20:45.853 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:45.853 Zero copy mechanism will not be used. 00:20:45.853 [2024-11-20 08:54:16.748359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.111 [2024-11-20 08:54:16.900377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.370 [2024-11-20 08:54:17.104924] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:46.370 [2024-11-20 08:54:17.105006] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:46.628 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:46.628 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:20:46.628 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:46.628 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:20:46.628 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.628 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.889 BaseBdev1_malloc 
00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.889 [2024-11-20 08:54:17.582788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:46.889 [2024-11-20 08:54:17.583032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.889 [2024-11-20 08:54:17.583134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:46.889 [2024-11-20 08:54:17.583343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.889 [2024-11-20 08:54:17.585925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.889 [2024-11-20 08:54:17.585986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:46.889 BaseBdev1 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.889 BaseBdev2_malloc 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.889 [2024-11-20 08:54:17.635796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:46.889 [2024-11-20 08:54:17.635881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.889 [2024-11-20 08:54:17.635917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:46.889 [2024-11-20 08:54:17.635941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.889 [2024-11-20 08:54:17.638571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.889 [2024-11-20 08:54:17.638628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:46.889 BaseBdev2 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.889 spare_malloc 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.889 spare_delay 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.889 [2024-11-20 08:54:17.720055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:46.889 [2024-11-20 08:54:17.720160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.889 [2024-11-20 08:54:17.720211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:46.889 [2024-11-20 08:54:17.720248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.889 [2024-11-20 08:54:17.722814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.889 [2024-11-20 08:54:17.722873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:46.889 spare 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:20:46.889 [2024-11-20 08:54:17.728129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:46.889 [2024-11-20 08:54:17.730746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:46.889 [2024-11-20 08:54:17.731186] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:46.889 [2024-11-20 08:54:17.731353] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:46.889 [2024-11-20 08:54:17.731598] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:46.889 [2024-11-20 08:54:17.731991] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:46.889 [2024-11-20 08:54:17.732027] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:46.889 [2024-11-20 08:54:17.732263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:46.889 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:46.890 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:46.890 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:46.890 08:54:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.890 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.890 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.890 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.890 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.890 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.890 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:46.890 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.890 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.890 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.890 "name": "raid_bdev1", 00:20:46.890 "uuid": "6a88b507-2096-4ab8-b701-26700ca51aac", 00:20:46.890 "strip_size_kb": 0, 00:20:46.890 "state": "online", 00:20:46.890 "raid_level": "raid1", 00:20:46.890 "superblock": true, 00:20:46.890 "num_base_bdevs": 2, 00:20:46.890 "num_base_bdevs_discovered": 2, 00:20:46.890 "num_base_bdevs_operational": 2, 00:20:46.890 "base_bdevs_list": [ 00:20:46.890 { 00:20:46.890 "name": "BaseBdev1", 00:20:46.890 "uuid": "2776be0f-606c-5010-8bd5-c5ac7fd41867", 00:20:46.890 "is_configured": true, 00:20:46.890 "data_offset": 256, 00:20:46.890 "data_size": 7936 00:20:46.890 }, 00:20:46.890 { 00:20:46.890 "name": "BaseBdev2", 00:20:46.890 "uuid": "8df233d7-d608-5e52-b41c-4ae858a69d4f", 00:20:46.890 "is_configured": true, 00:20:46.890 "data_offset": 256, 00:20:46.890 "data_size": 7936 
00:20:46.890 } 00:20:46.890 ] 00:20:46.890 }' 00:20:46.890 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.890 08:54:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.463 [2024-11-20 08:54:18.240877] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:47.463 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:47.722 [2024-11-20 08:54:18.628708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:47.981 /dev/nbd0 00:20:47.981 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:47.981 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:47.981 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:47.981 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:20:47.981 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:47.981 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:47.981 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:47.981 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:20:47.981 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:47.981 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:47.981 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:47.981 1+0 records in 00:20:47.981 1+0 records out 00:20:47.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366592 s, 11.2 MB/s 00:20:47.981 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.981 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:20:47.981 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:47.981 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:47.981 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:20:47.981 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:47.981 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:47.981 08:54:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:47.981 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:47.981 08:54:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:20:48.916 7936+0 records in 00:20:48.916 7936+0 records out 00:20:48.916 32505856 bytes (33 MB, 31 MiB) copied, 1.10808 s, 29.3 MB/s 00:20:48.916 08:54:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:48.916 08:54:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:48.916 08:54:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:48.916 08:54:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:48.916 08:54:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:20:48.916 08:54:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:48.916 08:54:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:49.484 [2024-11-20 08:54:20.097717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:49.484 08:54:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:49.484 [2024-11-20 08:54:20.109901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:49.484 "name": "raid_bdev1", 00:20:49.484 "uuid": "6a88b507-2096-4ab8-b701-26700ca51aac", 00:20:49.484 "strip_size_kb": 0, 00:20:49.484 "state": "online", 00:20:49.484 "raid_level": "raid1", 00:20:49.484 "superblock": true, 00:20:49.484 "num_base_bdevs": 2, 00:20:49.484 "num_base_bdevs_discovered": 1, 00:20:49.484 "num_base_bdevs_operational": 1, 00:20:49.484 "base_bdevs_list": [ 00:20:49.484 { 00:20:49.484 "name": null, 00:20:49.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.484 "is_configured": false, 00:20:49.484 "data_offset": 0, 00:20:49.484 "data_size": 7936 00:20:49.484 }, 00:20:49.484 { 00:20:49.484 "name": "BaseBdev2", 00:20:49.484 "uuid": "8df233d7-d608-5e52-b41c-4ae858a69d4f", 00:20:49.484 "is_configured": true, 00:20:49.484 "data_offset": 256, 00:20:49.484 "data_size": 7936 00:20:49.484 } 00:20:49.484 ] 00:20:49.484 }' 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:49.484 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:20:49.743 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:49.743 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.743 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:49.743 [2024-11-20 08:54:20.630040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:49.743 [2024-11-20 08:54:20.644663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:20:49.743 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.743 08:54:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:49.743 [2024-11-20 08:54:20.647382] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:51.122 "name": "raid_bdev1", 00:20:51.122 "uuid": "6a88b507-2096-4ab8-b701-26700ca51aac", 00:20:51.122 "strip_size_kb": 0, 00:20:51.122 "state": "online", 00:20:51.122 "raid_level": "raid1", 00:20:51.122 "superblock": true, 00:20:51.122 "num_base_bdevs": 2, 00:20:51.122 "num_base_bdevs_discovered": 2, 00:20:51.122 "num_base_bdevs_operational": 2, 00:20:51.122 "process": { 00:20:51.122 "type": "rebuild", 00:20:51.122 "target": "spare", 00:20:51.122 "progress": { 00:20:51.122 "blocks": 2560, 00:20:51.122 "percent": 32 00:20:51.122 } 00:20:51.122 }, 00:20:51.122 "base_bdevs_list": [ 00:20:51.122 { 00:20:51.122 "name": "spare", 00:20:51.122 "uuid": "4b519e4d-38da-5273-abeb-9518f6d5e7e5", 00:20:51.122 "is_configured": true, 00:20:51.122 "data_offset": 256, 00:20:51.122 "data_size": 7936 00:20:51.122 }, 00:20:51.122 { 00:20:51.122 "name": "BaseBdev2", 00:20:51.122 "uuid": "8df233d7-d608-5e52-b41c-4ae858a69d4f", 00:20:51.122 "is_configured": true, 00:20:51.122 "data_offset": 256, 00:20:51.122 "data_size": 7936 00:20:51.122 } 00:20:51.122 ] 00:20:51.122 }' 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:51.122 08:54:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.122 [2024-11-20 08:54:21.829026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:51.122 [2024-11-20 08:54:21.857057] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:51.122 [2024-11-20 08:54:21.857563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:51.122 [2024-11-20 08:54:21.857618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:51.122 [2024-11-20 08:54:21.857662] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:51.122 08:54:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.122 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:51.122 "name": "raid_bdev1", 00:20:51.122 "uuid": "6a88b507-2096-4ab8-b701-26700ca51aac", 00:20:51.122 "strip_size_kb": 0, 00:20:51.122 "state": "online", 00:20:51.122 "raid_level": "raid1", 00:20:51.122 "superblock": true, 00:20:51.122 "num_base_bdevs": 2, 00:20:51.122 "num_base_bdevs_discovered": 1, 00:20:51.122 "num_base_bdevs_operational": 1, 00:20:51.122 "base_bdevs_list": [ 00:20:51.122 { 00:20:51.122 "name": null, 00:20:51.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.122 "is_configured": false, 00:20:51.122 "data_offset": 0, 00:20:51.122 "data_size": 7936 00:20:51.122 }, 00:20:51.122 { 00:20:51.122 "name": "BaseBdev2", 00:20:51.122 "uuid": "8df233d7-d608-5e52-b41c-4ae858a69d4f", 00:20:51.122 "is_configured": true, 00:20:51.122 "data_offset": 256, 00:20:51.122 "data_size": 7936 00:20:51.122 } 00:20:51.122 ] 00:20:51.122 }' 00:20:51.123 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:51.123 08:54:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.690 08:54:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:51.690 08:54:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:51.690 08:54:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:51.690 08:54:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:51.690 08:54:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:51.690 08:54:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.690 08:54:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.690 08:54:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.690 08:54:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.690 08:54:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.690 08:54:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:51.690 "name": "raid_bdev1", 00:20:51.690 "uuid": "6a88b507-2096-4ab8-b701-26700ca51aac", 00:20:51.690 "strip_size_kb": 0, 00:20:51.690 "state": "online", 00:20:51.690 "raid_level": "raid1", 00:20:51.690 "superblock": true, 00:20:51.690 "num_base_bdevs": 2, 00:20:51.690 "num_base_bdevs_discovered": 1, 00:20:51.690 "num_base_bdevs_operational": 1, 00:20:51.690 "base_bdevs_list": [ 00:20:51.690 { 00:20:51.690 "name": null, 00:20:51.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.690 
"is_configured": false, 00:20:51.690 "data_offset": 0, 00:20:51.690 "data_size": 7936 00:20:51.690 }, 00:20:51.690 { 00:20:51.690 "name": "BaseBdev2", 00:20:51.690 "uuid": "8df233d7-d608-5e52-b41c-4ae858a69d4f", 00:20:51.690 "is_configured": true, 00:20:51.690 "data_offset": 256, 00:20:51.690 "data_size": 7936 00:20:51.690 } 00:20:51.690 ] 00:20:51.690 }' 00:20:51.690 08:54:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:51.690 08:54:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:51.690 08:54:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:51.690 08:54:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:51.690 08:54:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:51.690 08:54:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.690 08:54:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:51.690 [2024-11-20 08:54:22.562188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:51.690 [2024-11-20 08:54:22.575243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:20:51.690 08:54:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.690 08:54:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:51.690 [2024-11-20 08:54:22.577850] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:53.068 08:54:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:53.068 "name": "raid_bdev1", 00:20:53.068 "uuid": "6a88b507-2096-4ab8-b701-26700ca51aac", 00:20:53.068 "strip_size_kb": 0, 00:20:53.068 "state": "online", 00:20:53.068 "raid_level": "raid1", 00:20:53.068 "superblock": true, 00:20:53.068 "num_base_bdevs": 2, 00:20:53.068 "num_base_bdevs_discovered": 2, 00:20:53.068 "num_base_bdevs_operational": 2, 00:20:53.068 "process": { 00:20:53.068 "type": "rebuild", 00:20:53.068 "target": "spare", 00:20:53.068 "progress": { 00:20:53.068 "blocks": 2560, 00:20:53.068 "percent": 32 00:20:53.068 } 00:20:53.068 }, 00:20:53.068 "base_bdevs_list": [ 00:20:53.068 { 00:20:53.068 "name": "spare", 00:20:53.068 "uuid": "4b519e4d-38da-5273-abeb-9518f6d5e7e5", 00:20:53.068 "is_configured": true, 00:20:53.068 "data_offset": 256, 00:20:53.068 "data_size": 7936 00:20:53.068 }, 
00:20:53.068 { 00:20:53.068 "name": "BaseBdev2", 00:20:53.068 "uuid": "8df233d7-d608-5e52-b41c-4ae858a69d4f", 00:20:53.068 "is_configured": true, 00:20:53.068 "data_offset": 256, 00:20:53.068 "data_size": 7936 00:20:53.068 } 00:20:53.068 ] 00:20:53.068 }' 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:53.068 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=764 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:53.068 08:54:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:53.068 "name": "raid_bdev1", 00:20:53.068 "uuid": "6a88b507-2096-4ab8-b701-26700ca51aac", 00:20:53.068 "strip_size_kb": 0, 00:20:53.068 "state": "online", 00:20:53.068 "raid_level": "raid1", 00:20:53.068 "superblock": true, 00:20:53.068 "num_base_bdevs": 2, 00:20:53.068 "num_base_bdevs_discovered": 2, 00:20:53.068 "num_base_bdevs_operational": 2, 00:20:53.068 "process": { 00:20:53.068 "type": "rebuild", 00:20:53.068 "target": "spare", 00:20:53.068 "progress": { 00:20:53.068 "blocks": 2816, 00:20:53.068 "percent": 35 00:20:53.068 } 00:20:53.068 }, 00:20:53.068 "base_bdevs_list": [ 00:20:53.068 { 00:20:53.068 "name": "spare", 00:20:53.068 "uuid": "4b519e4d-38da-5273-abeb-9518f6d5e7e5", 00:20:53.068 "is_configured": true, 00:20:53.068 "data_offset": 256, 00:20:53.068 "data_size": 7936 00:20:53.068 }, 00:20:53.068 { 00:20:53.068 "name": "BaseBdev2", 00:20:53.068 "uuid": "8df233d7-d608-5e52-b41c-4ae858a69d4f", 00:20:53.068 
"is_configured": true, 00:20:53.068 "data_offset": 256, 00:20:53.068 "data_size": 7936 00:20:53.068 } 00:20:53.068 ] 00:20:53.068 }' 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:53.068 08:54:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:54.004 08:54:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:54.004 08:54:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:54.004 08:54:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:54.004 08:54:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:54.004 08:54:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:54.004 08:54:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:54.004 08:54:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.004 08:54:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.004 08:54:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:54.004 08:54:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.004 08:54:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.263 08:54:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:54.263 "name": "raid_bdev1", 00:20:54.263 "uuid": "6a88b507-2096-4ab8-b701-26700ca51aac", 00:20:54.263 "strip_size_kb": 0, 00:20:54.263 "state": "online", 00:20:54.263 "raid_level": "raid1", 00:20:54.263 "superblock": true, 00:20:54.263 "num_base_bdevs": 2, 00:20:54.263 "num_base_bdevs_discovered": 2, 00:20:54.263 "num_base_bdevs_operational": 2, 00:20:54.263 "process": { 00:20:54.263 "type": "rebuild", 00:20:54.263 "target": "spare", 00:20:54.263 "progress": { 00:20:54.263 "blocks": 5888, 00:20:54.263 "percent": 74 00:20:54.263 } 00:20:54.263 }, 00:20:54.263 "base_bdevs_list": [ 00:20:54.263 { 00:20:54.263 "name": "spare", 00:20:54.263 "uuid": "4b519e4d-38da-5273-abeb-9518f6d5e7e5", 00:20:54.263 "is_configured": true, 00:20:54.263 "data_offset": 256, 00:20:54.263 "data_size": 7936 00:20:54.263 }, 00:20:54.263 { 00:20:54.263 "name": "BaseBdev2", 00:20:54.263 "uuid": "8df233d7-d608-5e52-b41c-4ae858a69d4f", 00:20:54.263 "is_configured": true, 00:20:54.263 "data_offset": 256, 00:20:54.263 "data_size": 7936 00:20:54.263 } 00:20:54.263 ] 00:20:54.263 }' 00:20:54.263 08:54:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:54.263 08:54:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:54.263 08:54:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:54.263 08:54:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:54.263 08:54:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:54.831 [2024-11-20 08:54:25.700442] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:20:54.831 [2024-11-20 08:54:25.700560] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:54.831 [2024-11-20 08:54:25.700725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.403 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:55.403 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:55.403 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:55.403 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:55.403 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:55.403 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:55.403 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.403 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.403 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:55.403 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.403 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.403 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:55.403 "name": "raid_bdev1", 00:20:55.403 "uuid": "6a88b507-2096-4ab8-b701-26700ca51aac", 00:20:55.403 "strip_size_kb": 0, 00:20:55.403 "state": "online", 00:20:55.403 "raid_level": "raid1", 00:20:55.403 "superblock": true, 00:20:55.403 
"num_base_bdevs": 2, 00:20:55.403 "num_base_bdevs_discovered": 2, 00:20:55.403 "num_base_bdevs_operational": 2, 00:20:55.403 "base_bdevs_list": [ 00:20:55.403 { 00:20:55.403 "name": "spare", 00:20:55.403 "uuid": "4b519e4d-38da-5273-abeb-9518f6d5e7e5", 00:20:55.403 "is_configured": true, 00:20:55.403 "data_offset": 256, 00:20:55.403 "data_size": 7936 00:20:55.403 }, 00:20:55.403 { 00:20:55.403 "name": "BaseBdev2", 00:20:55.403 "uuid": "8df233d7-d608-5e52-b41c-4ae858a69d4f", 00:20:55.403 "is_configured": true, 00:20:55.403 "data_offset": 256, 00:20:55.403 "data_size": 7936 00:20:55.403 } 00:20:55.403 ] 00:20:55.403 }' 00:20:55.403 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:55.403 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:55.403 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:55.403 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:55.404 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:20:55.404 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:55.404 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:55.404 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:55.404 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:55.404 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:55.404 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.404 08:54:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.404 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.404 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:55.404 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.404 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:55.404 "name": "raid_bdev1", 00:20:55.404 "uuid": "6a88b507-2096-4ab8-b701-26700ca51aac", 00:20:55.404 "strip_size_kb": 0, 00:20:55.404 "state": "online", 00:20:55.404 "raid_level": "raid1", 00:20:55.404 "superblock": true, 00:20:55.404 "num_base_bdevs": 2, 00:20:55.404 "num_base_bdevs_discovered": 2, 00:20:55.404 "num_base_bdevs_operational": 2, 00:20:55.404 "base_bdevs_list": [ 00:20:55.404 { 00:20:55.404 "name": "spare", 00:20:55.404 "uuid": "4b519e4d-38da-5273-abeb-9518f6d5e7e5", 00:20:55.404 "is_configured": true, 00:20:55.404 "data_offset": 256, 00:20:55.404 "data_size": 7936 00:20:55.404 }, 00:20:55.404 { 00:20:55.404 "name": "BaseBdev2", 00:20:55.404 "uuid": "8df233d7-d608-5e52-b41c-4ae858a69d4f", 00:20:55.404 "is_configured": true, 00:20:55.404 "data_offset": 256, 00:20:55.404 "data_size": 7936 00:20:55.404 } 00:20:55.404 ] 00:20:55.404 }' 00:20:55.404 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:55.404 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:55.404 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:55.662 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:55.662 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:55.662 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:55.662 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:55.662 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:55.662 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:55.662 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:55.662 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.662 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.662 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.662 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.662 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.662 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.662 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.662 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:55.662 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.662 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.662 "name": "raid_bdev1", 00:20:55.662 "uuid": "6a88b507-2096-4ab8-b701-26700ca51aac", 00:20:55.662 
"strip_size_kb": 0, 00:20:55.662 "state": "online", 00:20:55.662 "raid_level": "raid1", 00:20:55.662 "superblock": true, 00:20:55.662 "num_base_bdevs": 2, 00:20:55.662 "num_base_bdevs_discovered": 2, 00:20:55.662 "num_base_bdevs_operational": 2, 00:20:55.662 "base_bdevs_list": [ 00:20:55.662 { 00:20:55.662 "name": "spare", 00:20:55.662 "uuid": "4b519e4d-38da-5273-abeb-9518f6d5e7e5", 00:20:55.662 "is_configured": true, 00:20:55.662 "data_offset": 256, 00:20:55.662 "data_size": 7936 00:20:55.662 }, 00:20:55.662 { 00:20:55.662 "name": "BaseBdev2", 00:20:55.662 "uuid": "8df233d7-d608-5e52-b41c-4ae858a69d4f", 00:20:55.662 "is_configured": true, 00:20:55.662 "data_offset": 256, 00:20:55.662 "data_size": 7936 00:20:55.662 } 00:20:55.662 ] 00:20:55.662 }' 00:20:55.662 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.662 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:56.229 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:56.229 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.229 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:56.229 [2024-11-20 08:54:26.859485] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:56.229 [2024-11-20 08:54:26.859529] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:56.229 [2024-11-20 08:54:26.859648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:56.229 [2024-11-20 08:54:26.859748] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:56.229 [2024-11-20 08:54:26.859769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:20:56.229 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.229 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.229 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.229 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:20:56.229 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:56.229 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.229 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:56.229 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:56.229 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:56.229 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:56.229 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:56.229 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:56.229 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:56.229 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:56.230 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:56.230 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:20:56.230 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:56.230 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:56.230 08:54:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:56.488 /dev/nbd0 00:20:56.488 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:56.488 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:56.488 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:56.488 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:20:56.488 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:56.488 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:56.488 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:56.488 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:20:56.488 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:56.488 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:56.488 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:56.488 1+0 records in 00:20:56.488 1+0 records out 00:20:56.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326716 s, 12.5 MB/s 00:20:56.488 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:56.488 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:20:56.488 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:56.488 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:56.488 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:20:56.488 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:56.488 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:56.488 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:56.747 /dev/nbd1 00:20:56.747 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:56.747 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:56.747 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:56.747 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:20:56.747 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:56.747 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:56.747 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:56.747 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:20:56.747 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:56.747 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:56.747 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:56.747 1+0 records in 00:20:56.747 1+0 records out 00:20:56.747 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501323 s, 8.2 MB/s 00:20:56.747 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:56.747 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:20:56.747 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:56.747 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:56.747 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:20:56.747 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:56.747 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:56.747 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:57.006 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:57.006 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:57.006 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:57.006 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:20:57.007 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:20:57.007 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:57.007 08:54:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:57.266 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:57.266 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:57.266 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:57.266 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:57.266 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:57.266 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:57.266 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:57.266 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:57.266 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:57.266 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:57.525 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:57.525 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:57.525 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:20:57.525 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:57.525 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:57.525 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:57.525 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:57.525 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:57.525 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:57.525 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:57.525 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.525 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.525 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.525 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:57.525 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.525 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.525 [2024-11-20 08:54:28.336436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:57.525 [2024-11-20 08:54:28.336647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:57.525 [2024-11-20 08:54:28.336700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:57.525 [2024-11-20 08:54:28.336721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:20:57.525 [2024-11-20 08:54:28.339444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:57.525 [2024-11-20 08:54:28.339494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:57.525 [2024-11-20 08:54:28.339585] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:57.525 [2024-11-20 08:54:28.339681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:57.525 [2024-11-20 08:54:28.339884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:57.525 spare 00:20:57.525 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.525 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:57.525 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.525 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.784 [2024-11-20 08:54:28.440003] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:57.784 [2024-11-20 08:54:28.440251] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:57.784 [2024-11-20 08:54:28.440450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:20:57.784 [2024-11-20 08:54:28.440810] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:57.784 [2024-11-20 08:54:28.440844] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:57.784 [2024-11-20 08:54:28.441040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:57.784 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:57.784 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:57.784 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:57.784 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:57.784 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:57.784 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:57.784 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:57.784 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:57.784 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:57.784 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:57.784 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:57.784 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.784 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.784 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.784 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.784 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.784 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:57.784 "name": "raid_bdev1", 00:20:57.784 "uuid": 
"6a88b507-2096-4ab8-b701-26700ca51aac", 00:20:57.784 "strip_size_kb": 0, 00:20:57.784 "state": "online", 00:20:57.784 "raid_level": "raid1", 00:20:57.784 "superblock": true, 00:20:57.784 "num_base_bdevs": 2, 00:20:57.784 "num_base_bdevs_discovered": 2, 00:20:57.784 "num_base_bdevs_operational": 2, 00:20:57.784 "base_bdevs_list": [ 00:20:57.784 { 00:20:57.784 "name": "spare", 00:20:57.784 "uuid": "4b519e4d-38da-5273-abeb-9518f6d5e7e5", 00:20:57.784 "is_configured": true, 00:20:57.784 "data_offset": 256, 00:20:57.784 "data_size": 7936 00:20:57.784 }, 00:20:57.784 { 00:20:57.784 "name": "BaseBdev2", 00:20:57.784 "uuid": "8df233d7-d608-5e52-b41c-4ae858a69d4f", 00:20:57.784 "is_configured": true, 00:20:57.784 "data_offset": 256, 00:20:57.784 "data_size": 7936 00:20:57.784 } 00:20:57.784 ] 00:20:57.784 }' 00:20:57.784 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:57.784 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:58.043 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:58.043 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:58.043 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:58.043 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:58.043 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:58.043 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.043 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.043 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.043 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:58.043 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.302 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:58.302 "name": "raid_bdev1", 00:20:58.302 "uuid": "6a88b507-2096-4ab8-b701-26700ca51aac", 00:20:58.302 "strip_size_kb": 0, 00:20:58.302 "state": "online", 00:20:58.302 "raid_level": "raid1", 00:20:58.302 "superblock": true, 00:20:58.302 "num_base_bdevs": 2, 00:20:58.302 "num_base_bdevs_discovered": 2, 00:20:58.302 "num_base_bdevs_operational": 2, 00:20:58.302 "base_bdevs_list": [ 00:20:58.302 { 00:20:58.302 "name": "spare", 00:20:58.302 "uuid": "4b519e4d-38da-5273-abeb-9518f6d5e7e5", 00:20:58.302 "is_configured": true, 00:20:58.302 "data_offset": 256, 00:20:58.302 "data_size": 7936 00:20:58.302 }, 00:20:58.302 { 00:20:58.302 "name": "BaseBdev2", 00:20:58.302 "uuid": "8df233d7-d608-5e52-b41c-4ae858a69d4f", 00:20:58.302 "is_configured": true, 00:20:58.302 "data_offset": 256, 00:20:58.302 "data_size": 7936 00:20:58.302 } 00:20:58.302 ] 00:20:58.302 }' 00:20:58.302 08:54:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:58.302 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:58.302 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:58.302 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:58.302 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:58.302 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:58.302 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.302 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:58.302 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.302 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:58.302 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:58.302 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.302 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:58.302 [2024-11-20 08:54:29.133255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:58.302 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.302 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:58.302 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:58.303 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:58.303 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:58.303 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:58.303 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:58.303 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:58.303 08:54:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:58.303 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:58.303 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:58.303 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.303 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.303 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.303 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:58.303 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.303 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:58.303 "name": "raid_bdev1", 00:20:58.303 "uuid": "6a88b507-2096-4ab8-b701-26700ca51aac", 00:20:58.303 "strip_size_kb": 0, 00:20:58.303 "state": "online", 00:20:58.303 "raid_level": "raid1", 00:20:58.303 "superblock": true, 00:20:58.303 "num_base_bdevs": 2, 00:20:58.303 "num_base_bdevs_discovered": 1, 00:20:58.303 "num_base_bdevs_operational": 1, 00:20:58.303 "base_bdevs_list": [ 00:20:58.303 { 00:20:58.303 "name": null, 00:20:58.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.303 "is_configured": false, 00:20:58.303 "data_offset": 0, 00:20:58.303 "data_size": 7936 00:20:58.303 }, 00:20:58.303 { 00:20:58.303 "name": "BaseBdev2", 00:20:58.303 "uuid": "8df233d7-d608-5e52-b41c-4ae858a69d4f", 00:20:58.303 "is_configured": true, 00:20:58.303 "data_offset": 256, 00:20:58.303 "data_size": 7936 00:20:58.303 } 00:20:58.303 ] 00:20:58.303 }' 00:20:58.303 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:58.303 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:58.870 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:58.870 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.870 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:58.870 [2024-11-20 08:54:29.641416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:58.870 [2024-11-20 08:54:29.641684] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:58.870 [2024-11-20 08:54:29.641715] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:58.870 [2024-11-20 08:54:29.641794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:58.870 [2024-11-20 08:54:29.654931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:20:58.870 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.870 08:54:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:58.870 [2024-11-20 08:54:29.657476] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:59.806 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.806 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:59.806 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:59.806 08:54:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:59.806 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:59.806 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.806 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.806 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.806 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:59.806 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.806 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:59.806 "name": "raid_bdev1", 00:20:59.806 "uuid": "6a88b507-2096-4ab8-b701-26700ca51aac", 00:20:59.806 "strip_size_kb": 0, 00:20:59.806 "state": "online", 00:20:59.806 "raid_level": "raid1", 00:20:59.806 "superblock": true, 00:20:59.806 "num_base_bdevs": 2, 00:20:59.806 "num_base_bdevs_discovered": 2, 00:20:59.806 "num_base_bdevs_operational": 2, 00:20:59.806 "process": { 00:20:59.806 "type": "rebuild", 00:20:59.806 "target": "spare", 00:20:59.806 "progress": { 00:20:59.806 "blocks": 2560, 00:20:59.806 "percent": 32 00:20:59.806 } 00:20:59.806 }, 00:20:59.806 "base_bdevs_list": [ 00:20:59.806 { 00:20:59.806 "name": "spare", 00:20:59.806 "uuid": "4b519e4d-38da-5273-abeb-9518f6d5e7e5", 00:20:59.806 "is_configured": true, 00:20:59.806 "data_offset": 256, 00:20:59.806 "data_size": 7936 00:20:59.806 }, 00:20:59.806 { 00:20:59.806 "name": "BaseBdev2", 00:20:59.806 "uuid": "8df233d7-d608-5e52-b41c-4ae858a69d4f", 00:20:59.806 "is_configured": true, 00:20:59.806 "data_offset": 256, 00:20:59.806 "data_size": 7936 00:20:59.806 } 00:20:59.806 ] 00:20:59.806 
}' 00:20:59.806 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:00.076 [2024-11-20 08:54:30.827814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:00.076 [2024-11-20 08:54:30.866155] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:00.076 [2024-11-20 08:54:30.866400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.076 [2024-11-20 08:54:30.866434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:00.076 [2024-11-20 08:54:30.866469] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:00.076 "name": "raid_bdev1", 00:21:00.076 "uuid": "6a88b507-2096-4ab8-b701-26700ca51aac", 00:21:00.076 "strip_size_kb": 0, 00:21:00.076 "state": "online", 00:21:00.076 "raid_level": "raid1", 00:21:00.076 "superblock": true, 00:21:00.076 "num_base_bdevs": 2, 00:21:00.076 "num_base_bdevs_discovered": 1, 00:21:00.076 "num_base_bdevs_operational": 1, 00:21:00.076 "base_bdevs_list": [ 00:21:00.076 { 00:21:00.076 "name": 
null, 00:21:00.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.076 "is_configured": false, 00:21:00.076 "data_offset": 0, 00:21:00.076 "data_size": 7936 00:21:00.076 }, 00:21:00.076 { 00:21:00.076 "name": "BaseBdev2", 00:21:00.076 "uuid": "8df233d7-d608-5e52-b41c-4ae858a69d4f", 00:21:00.076 "is_configured": true, 00:21:00.076 "data_offset": 256, 00:21:00.076 "data_size": 7936 00:21:00.076 } 00:21:00.076 ] 00:21:00.076 }' 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:00.076 08:54:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:00.655 08:54:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:00.655 08:54:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.655 08:54:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:00.655 [2024-11-20 08:54:31.404681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:00.655 [2024-11-20 08:54:31.404918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.655 [2024-11-20 08:54:31.405007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:00.655 [2024-11-20 08:54:31.405250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.655 [2024-11-20 08:54:31.405635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.655 [2024-11-20 08:54:31.405811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:00.655 [2024-11-20 08:54:31.406033] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:00.655 [2024-11-20 08:54:31.406071] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:00.655 [2024-11-20 08:54:31.406089] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:00.655 [2024-11-20 08:54:31.406140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:00.655 spare 00:21:00.655 [2024-11-20 08:54:31.418754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:21:00.655 08:54:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.655 08:54:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:00.655 [2024-11-20 08:54:31.421275] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:01.592 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:01.592 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:01.592 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:01.592 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:01.592 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:01.592 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.592 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.592 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.592 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:01.592 08:54:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.592 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:01.592 "name": "raid_bdev1", 00:21:01.592 "uuid": "6a88b507-2096-4ab8-b701-26700ca51aac", 00:21:01.592 "strip_size_kb": 0, 00:21:01.592 "state": "online", 00:21:01.592 "raid_level": "raid1", 00:21:01.592 "superblock": true, 00:21:01.592 "num_base_bdevs": 2, 00:21:01.592 "num_base_bdevs_discovered": 2, 00:21:01.592 "num_base_bdevs_operational": 2, 00:21:01.592 "process": { 00:21:01.592 "type": "rebuild", 00:21:01.592 "target": "spare", 00:21:01.592 "progress": { 00:21:01.592 "blocks": 2560, 00:21:01.592 "percent": 32 00:21:01.592 } 00:21:01.592 }, 00:21:01.592 "base_bdevs_list": [ 00:21:01.592 { 00:21:01.592 "name": "spare", 00:21:01.592 "uuid": "4b519e4d-38da-5273-abeb-9518f6d5e7e5", 00:21:01.592 "is_configured": true, 00:21:01.592 "data_offset": 256, 00:21:01.592 "data_size": 7936 00:21:01.592 }, 00:21:01.592 { 00:21:01.592 "name": "BaseBdev2", 00:21:01.592 "uuid": "8df233d7-d608-5e52-b41c-4ae858a69d4f", 00:21:01.592 "is_configured": true, 00:21:01.592 "data_offset": 256, 00:21:01.592 "data_size": 7936 00:21:01.592 } 00:21:01.592 ] 00:21:01.592 }' 00:21:01.592 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:01.852 [2024-11-20 08:54:32.579551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:01.852 [2024-11-20 08:54:32.630015] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:01.852 [2024-11-20 08:54:32.630103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:01.852 [2024-11-20 08:54:32.630140] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:01.852 [2024-11-20 08:54:32.630179] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:01.852 "name": "raid_bdev1", 00:21:01.852 "uuid": "6a88b507-2096-4ab8-b701-26700ca51aac", 00:21:01.852 "strip_size_kb": 0, 00:21:01.852 "state": "online", 00:21:01.852 "raid_level": "raid1", 00:21:01.852 "superblock": true, 00:21:01.852 "num_base_bdevs": 2, 00:21:01.852 "num_base_bdevs_discovered": 1, 00:21:01.852 "num_base_bdevs_operational": 1, 00:21:01.852 "base_bdevs_list": [ 00:21:01.852 { 00:21:01.852 "name": null, 00:21:01.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.852 "is_configured": false, 00:21:01.852 "data_offset": 0, 00:21:01.852 "data_size": 7936 00:21:01.852 }, 00:21:01.852 { 00:21:01.852 "name": "BaseBdev2", 00:21:01.852 "uuid": "8df233d7-d608-5e52-b41c-4ae858a69d4f", 00:21:01.852 "is_configured": true, 00:21:01.852 "data_offset": 256, 00:21:01.852 "data_size": 7936 00:21:01.852 } 00:21:01.852 ] 00:21:01.852 }' 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:01.852 08:54:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:02.420 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:02.421 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:02.421 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:02.421 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:02.421 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:02.421 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.421 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.421 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.421 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:02.421 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.421 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:02.421 "name": "raid_bdev1", 00:21:02.421 "uuid": "6a88b507-2096-4ab8-b701-26700ca51aac", 00:21:02.421 "strip_size_kb": 0, 00:21:02.421 "state": "online", 00:21:02.421 "raid_level": "raid1", 00:21:02.421 "superblock": true, 00:21:02.421 "num_base_bdevs": 2, 00:21:02.421 "num_base_bdevs_discovered": 1, 00:21:02.421 "num_base_bdevs_operational": 1, 00:21:02.421 "base_bdevs_list": [ 00:21:02.421 { 00:21:02.421 "name": null, 00:21:02.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.421 "is_configured": false, 00:21:02.421 "data_offset": 0, 00:21:02.421 "data_size": 7936 00:21:02.421 }, 00:21:02.421 { 00:21:02.421 "name": "BaseBdev2", 00:21:02.421 "uuid": "8df233d7-d608-5e52-b41c-4ae858a69d4f", 
00:21:02.421 "is_configured": true, 00:21:02.421 "data_offset": 256, 00:21:02.421 "data_size": 7936 00:21:02.421 } 00:21:02.421 ] 00:21:02.421 }' 00:21:02.421 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:02.421 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:02.421 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:02.421 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:02.421 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:02.421 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.421 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:02.421 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.421 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:02.421 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.421 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:02.421 [2024-11-20 08:54:33.324480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:02.421 [2024-11-20 08:54:33.324553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.421 [2024-11-20 08:54:33.324596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:02.421 [2024-11-20 08:54:33.324615] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:21:02.421 [2024-11-20 08:54:33.324896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.421 [2024-11-20 08:54:33.324922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:02.421 [2024-11-20 08:54:33.324996] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:02.421 [2024-11-20 08:54:33.325019] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:02.421 [2024-11-20 08:54:33.325036] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:02.421 [2024-11-20 08:54:33.325052] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:02.421 BaseBdev1 00:21:02.421 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.421 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:03.797 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:03.797 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:03.797 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:03.797 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:03.797 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:03.797 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:03.797 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:03.797 08:54:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:03.797 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:03.797 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:03.797 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.797 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.797 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.797 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:03.797 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.797 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:03.797 "name": "raid_bdev1", 00:21:03.797 "uuid": "6a88b507-2096-4ab8-b701-26700ca51aac", 00:21:03.797 "strip_size_kb": 0, 00:21:03.797 "state": "online", 00:21:03.797 "raid_level": "raid1", 00:21:03.797 "superblock": true, 00:21:03.797 "num_base_bdevs": 2, 00:21:03.797 "num_base_bdevs_discovered": 1, 00:21:03.797 "num_base_bdevs_operational": 1, 00:21:03.797 "base_bdevs_list": [ 00:21:03.797 { 00:21:03.797 "name": null, 00:21:03.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.797 "is_configured": false, 00:21:03.797 "data_offset": 0, 00:21:03.797 "data_size": 7936 00:21:03.797 }, 00:21:03.797 { 00:21:03.797 "name": "BaseBdev2", 00:21:03.797 "uuid": "8df233d7-d608-5e52-b41c-4ae858a69d4f", 00:21:03.797 "is_configured": true, 00:21:03.797 "data_offset": 256, 00:21:03.797 "data_size": 7936 00:21:03.797 } 00:21:03.797 ] 00:21:03.797 }' 00:21:03.797 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:03.797 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:04.057 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:04.057 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:04.057 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:04.057 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:04.057 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:04.057 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.057 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.057 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:04.057 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.057 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.057 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:04.057 "name": "raid_bdev1", 00:21:04.057 "uuid": "6a88b507-2096-4ab8-b701-26700ca51aac", 00:21:04.057 "strip_size_kb": 0, 00:21:04.057 "state": "online", 00:21:04.057 "raid_level": "raid1", 00:21:04.057 "superblock": true, 00:21:04.057 "num_base_bdevs": 2, 00:21:04.057 "num_base_bdevs_discovered": 1, 00:21:04.057 "num_base_bdevs_operational": 1, 00:21:04.057 "base_bdevs_list": [ 00:21:04.057 { 00:21:04.057 "name": null, 00:21:04.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.057 
"is_configured": false, 00:21:04.057 "data_offset": 0, 00:21:04.057 "data_size": 7936 00:21:04.057 }, 00:21:04.057 { 00:21:04.057 "name": "BaseBdev2", 00:21:04.057 "uuid": "8df233d7-d608-5e52-b41c-4ae858a69d4f", 00:21:04.057 "is_configured": true, 00:21:04.057 "data_offset": 256, 00:21:04.057 "data_size": 7936 00:21:04.057 } 00:21:04.057 ] 00:21:04.057 }' 00:21:04.057 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:04.057 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:04.057 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:04.317 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:04.317 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:04.317 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:21:04.317 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:04.317 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:04.317 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:04.317 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:04.317 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:04.317 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:04.317 08:54:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.317 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:04.317 [2024-11-20 08:54:35.005014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:04.317 [2024-11-20 08:54:35.005271] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:04.317 [2024-11-20 08:54:35.005301] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:04.317 request: 00:21:04.317 { 00:21:04.317 "base_bdev": "BaseBdev1", 00:21:04.317 "raid_bdev": "raid_bdev1", 00:21:04.317 "method": "bdev_raid_add_base_bdev", 00:21:04.317 "req_id": 1 00:21:04.317 } 00:21:04.317 Got JSON-RPC error response 00:21:04.317 response: 00:21:04.317 { 00:21:04.317 "code": -22, 00:21:04.317 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:04.317 } 00:21:04.317 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:04.317 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:21:04.317 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:04.317 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:04.317 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:04.317 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:05.253 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:05.253 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:21:05.254 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:05.254 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:05.254 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:05.254 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:05.254 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:05.254 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:05.254 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:05.254 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:05.254 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.254 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.254 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.254 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:05.254 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.254 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:05.254 "name": "raid_bdev1", 00:21:05.254 "uuid": "6a88b507-2096-4ab8-b701-26700ca51aac", 00:21:05.254 "strip_size_kb": 0, 00:21:05.254 "state": "online", 00:21:05.254 "raid_level": "raid1", 00:21:05.254 "superblock": true, 00:21:05.254 "num_base_bdevs": 2, 00:21:05.254 
"num_base_bdevs_discovered": 1, 00:21:05.254 "num_base_bdevs_operational": 1, 00:21:05.254 "base_bdevs_list": [ 00:21:05.254 { 00:21:05.254 "name": null, 00:21:05.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.254 "is_configured": false, 00:21:05.254 "data_offset": 0, 00:21:05.254 "data_size": 7936 00:21:05.254 }, 00:21:05.254 { 00:21:05.254 "name": "BaseBdev2", 00:21:05.254 "uuid": "8df233d7-d608-5e52-b41c-4ae858a69d4f", 00:21:05.254 "is_configured": true, 00:21:05.254 "data_offset": 256, 00:21:05.254 "data_size": 7936 00:21:05.254 } 00:21:05.254 ] 00:21:05.254 }' 00:21:05.254 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:05.254 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:05.865 "name": "raid_bdev1", 00:21:05.865 "uuid": "6a88b507-2096-4ab8-b701-26700ca51aac", 00:21:05.865 "strip_size_kb": 0, 00:21:05.865 "state": "online", 00:21:05.865 "raid_level": "raid1", 00:21:05.865 "superblock": true, 00:21:05.865 "num_base_bdevs": 2, 00:21:05.865 "num_base_bdevs_discovered": 1, 00:21:05.865 "num_base_bdevs_operational": 1, 00:21:05.865 "base_bdevs_list": [ 00:21:05.865 { 00:21:05.865 "name": null, 00:21:05.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.865 "is_configured": false, 00:21:05.865 "data_offset": 0, 00:21:05.865 "data_size": 7936 00:21:05.865 }, 00:21:05.865 { 00:21:05.865 "name": "BaseBdev2", 00:21:05.865 "uuid": "8df233d7-d608-5e52-b41c-4ae858a69d4f", 00:21:05.865 "is_configured": true, 00:21:05.865 "data_offset": 256, 00:21:05.865 "data_size": 7936 00:21:05.865 } 00:21:05.865 ] 00:21:05.865 }' 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88206 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88206 ']' 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88206 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:21:05.865 08:54:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88206 00:21:05.865 killing process with pid 88206 00:21:05.865 Received shutdown signal, test time was about 60.000000 seconds 00:21:05.865 00:21:05.865 Latency(us) 00:21:05.865 [2024-11-20T08:54:36.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.865 [2024-11-20T08:54:36.781Z] =================================================================================================================== 00:21:05.865 [2024-11-20T08:54:36.781Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88206' 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88206 00:21:05.865 [2024-11-20 08:54:36.716088] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:05.865 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88206 00:21:05.865 [2024-11-20 08:54:36.716295] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:05.865 [2024-11-20 08:54:36.716381] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:05.865 [2024-11-20 08:54:36.716407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:06.124 [2024-11-20 08:54:37.001471] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:21:07.512 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:21:07.512 00:21:07.512 real 0m21.581s 00:21:07.512 user 0m28.930s 00:21:07.512 sys 0m2.530s 00:21:07.512 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:07.512 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:07.512 ************************************ 00:21:07.512 END TEST raid_rebuild_test_sb_md_separate 00:21:07.512 ************************************ 00:21:07.512 08:54:38 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:21:07.512 08:54:38 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:21:07.512 08:54:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:07.512 08:54:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:07.512 08:54:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:07.512 ************************************ 00:21:07.512 START TEST raid_state_function_test_sb_md_interleaved 00:21:07.512 ************************************ 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:07.512 08:54:38 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88904 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88904' 00:21:07.512 Process raid pid: 88904 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88904 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88904 ']' 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:07.512 08:54:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.512 [2024-11-20 08:54:38.196979] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:21:07.513 [2024-11-20 08:54:38.197422] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.513 [2024-11-20 08:54:38.388425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.771 [2024-11-20 08:54:38.549375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.031 [2024-11-20 08:54:38.757380] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:08.031 [2024-11-20 08:54:38.757441] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:08.290 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:08.290 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:21:08.290 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:08.290 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.290 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.290 [2024-11-20 08:54:39.184752] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:08.290 [2024-11-20 08:54:39.184822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:08.290 [2024-11-20 08:54:39.184844] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:08.290 [2024-11-20 08:54:39.184865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:08.290 08:54:39 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.290 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:08.290 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:08.290 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:08.290 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:08.290 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:08.290 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:08.290 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.290 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.290 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.290 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.290 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.290 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:08.290 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.290 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.547 08:54:39 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.547 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.547 "name": "Existed_Raid", 00:21:08.547 "uuid": "7d725599-5d74-4768-9971-f4deff93b1db", 00:21:08.547 "strip_size_kb": 0, 00:21:08.547 "state": "configuring", 00:21:08.548 "raid_level": "raid1", 00:21:08.548 "superblock": true, 00:21:08.548 "num_base_bdevs": 2, 00:21:08.548 "num_base_bdevs_discovered": 0, 00:21:08.548 "num_base_bdevs_operational": 2, 00:21:08.548 "base_bdevs_list": [ 00:21:08.548 { 00:21:08.548 "name": "BaseBdev1", 00:21:08.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.548 "is_configured": false, 00:21:08.548 "data_offset": 0, 00:21:08.548 "data_size": 0 00:21:08.548 }, 00:21:08.548 { 00:21:08.548 "name": "BaseBdev2", 00:21:08.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.548 "is_configured": false, 00:21:08.548 "data_offset": 0, 00:21:08.548 "data_size": 0 00:21:08.548 } 00:21:08.548 ] 00:21:08.548 }' 00:21:08.548 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.548 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.806 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:08.806 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.806 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.806 [2024-11-20 08:54:39.692826] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:08.806 [2024-11-20 08:54:39.692874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:21:08.806 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.806 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:08.806 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.806 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.806 [2024-11-20 08:54:39.700805] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:08.806 [2024-11-20 08:54:39.700863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:08.806 [2024-11-20 08:54:39.700883] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:08.806 [2024-11-20 08:54:39.700906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:08.806 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.806 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:21:08.806 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.807 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:09.066 [2024-11-20 08:54:39.745528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:09.066 BaseBdev1 00:21:09.066 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.066 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:09.066 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:09.066 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:09.066 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:21:09.066 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:09.066 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:09.066 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:09.066 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.066 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:09.066 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.066 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:09.066 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.066 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:09.066 [ 00:21:09.066 { 00:21:09.067 "name": "BaseBdev1", 00:21:09.067 "aliases": [ 00:21:09.067 "27f2f85f-e057-402c-99dc-c29a9a0be980" 00:21:09.067 ], 00:21:09.067 "product_name": "Malloc disk", 00:21:09.067 "block_size": 4128, 00:21:09.067 "num_blocks": 8192, 00:21:09.067 "uuid": "27f2f85f-e057-402c-99dc-c29a9a0be980", 00:21:09.067 "md_size": 32, 00:21:09.067 
"md_interleave": true, 00:21:09.067 "dif_type": 0, 00:21:09.067 "assigned_rate_limits": { 00:21:09.067 "rw_ios_per_sec": 0, 00:21:09.067 "rw_mbytes_per_sec": 0, 00:21:09.067 "r_mbytes_per_sec": 0, 00:21:09.067 "w_mbytes_per_sec": 0 00:21:09.067 }, 00:21:09.067 "claimed": true, 00:21:09.067 "claim_type": "exclusive_write", 00:21:09.067 "zoned": false, 00:21:09.067 "supported_io_types": { 00:21:09.067 "read": true, 00:21:09.067 "write": true, 00:21:09.067 "unmap": true, 00:21:09.067 "flush": true, 00:21:09.067 "reset": true, 00:21:09.067 "nvme_admin": false, 00:21:09.067 "nvme_io": false, 00:21:09.067 "nvme_io_md": false, 00:21:09.067 "write_zeroes": true, 00:21:09.067 "zcopy": true, 00:21:09.067 "get_zone_info": false, 00:21:09.067 "zone_management": false, 00:21:09.067 "zone_append": false, 00:21:09.067 "compare": false, 00:21:09.067 "compare_and_write": false, 00:21:09.067 "abort": true, 00:21:09.067 "seek_hole": false, 00:21:09.067 "seek_data": false, 00:21:09.067 "copy": true, 00:21:09.067 "nvme_iov_md": false 00:21:09.067 }, 00:21:09.067 "memory_domains": [ 00:21:09.067 { 00:21:09.067 "dma_device_id": "system", 00:21:09.067 "dma_device_type": 1 00:21:09.067 }, 00:21:09.067 { 00:21:09.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:09.067 "dma_device_type": 2 00:21:09.067 } 00:21:09.067 ], 00:21:09.067 "driver_specific": {} 00:21:09.067 } 00:21:09.067 ] 00:21:09.067 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.067 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:21:09.067 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:09.067 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:09.067 08:54:39 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:09.067 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:09.067 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:09.067 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:09.067 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:09.067 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:09.067 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:09.067 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:09.067 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.067 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:09.067 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.067 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:09.067 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.067 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:09.067 "name": "Existed_Raid", 00:21:09.067 "uuid": "0154d729-b4c0-4c7d-83c1-c149d7121f9d", 00:21:09.067 "strip_size_kb": 0, 00:21:09.067 "state": "configuring", 00:21:09.067 "raid_level": "raid1", 
00:21:09.067 "superblock": true, 00:21:09.067 "num_base_bdevs": 2, 00:21:09.067 "num_base_bdevs_discovered": 1, 00:21:09.067 "num_base_bdevs_operational": 2, 00:21:09.067 "base_bdevs_list": [ 00:21:09.067 { 00:21:09.067 "name": "BaseBdev1", 00:21:09.067 "uuid": "27f2f85f-e057-402c-99dc-c29a9a0be980", 00:21:09.067 "is_configured": true, 00:21:09.067 "data_offset": 256, 00:21:09.067 "data_size": 7936 00:21:09.067 }, 00:21:09.067 { 00:21:09.067 "name": "BaseBdev2", 00:21:09.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.067 "is_configured": false, 00:21:09.067 "data_offset": 0, 00:21:09.067 "data_size": 0 00:21:09.067 } 00:21:09.067 ] 00:21:09.067 }' 00:21:09.067 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:09.067 08:54:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:09.635 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:09.635 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.635 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:09.635 [2024-11-20 08:54:40.289758] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:09.635 [2024-11-20 08:54:40.289823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:09.635 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.635 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:09.635 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:21:09.635 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:09.635 [2024-11-20 08:54:40.301837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:09.635 [2024-11-20 08:54:40.304462] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:09.635 [2024-11-20 08:54:40.304658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:09.635 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.635 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:09.635 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:09.636 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:09.636 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:09.636 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:09.636 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:09.636 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:09.636 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:09.636 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:09.636 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:09.636 
08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:09.636 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:09.636 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.636 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:09.636 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.636 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:09.636 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.636 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:09.636 "name": "Existed_Raid", 00:21:09.636 "uuid": "f93a7c32-16aa-424c-9c41-94a9e1ca7d4c", 00:21:09.636 "strip_size_kb": 0, 00:21:09.636 "state": "configuring", 00:21:09.636 "raid_level": "raid1", 00:21:09.636 "superblock": true, 00:21:09.636 "num_base_bdevs": 2, 00:21:09.636 "num_base_bdevs_discovered": 1, 00:21:09.636 "num_base_bdevs_operational": 2, 00:21:09.636 "base_bdevs_list": [ 00:21:09.636 { 00:21:09.636 "name": "BaseBdev1", 00:21:09.636 "uuid": "27f2f85f-e057-402c-99dc-c29a9a0be980", 00:21:09.636 "is_configured": true, 00:21:09.636 "data_offset": 256, 00:21:09.636 "data_size": 7936 00:21:09.636 }, 00:21:09.636 { 00:21:09.636 "name": "BaseBdev2", 00:21:09.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.636 "is_configured": false, 00:21:09.636 "data_offset": 0, 00:21:09.636 "data_size": 0 00:21:09.636 } 00:21:09.636 ] 00:21:09.636 }' 00:21:09.636 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:21:09.636 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:10.205 [2024-11-20 08:54:40.892910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:10.205 [2024-11-20 08:54:40.893254] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:10.205 [2024-11-20 08:54:40.893277] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:10.205 [2024-11-20 08:54:40.893396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:10.205 [2024-11-20 08:54:40.893529] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:10.205 [2024-11-20 08:54:40.893552] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:10.205 [2024-11-20 08:54:40.893665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:10.205 BaseBdev2 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:10.205 [ 00:21:10.205 { 00:21:10.205 "name": "BaseBdev2", 00:21:10.205 "aliases": [ 00:21:10.205 "b59a49c6-8214-4f00-b655-5c43ef212288" 00:21:10.205 ], 00:21:10.205 "product_name": "Malloc disk", 00:21:10.205 "block_size": 4128, 00:21:10.205 "num_blocks": 8192, 00:21:10.205 "uuid": "b59a49c6-8214-4f00-b655-5c43ef212288", 00:21:10.205 "md_size": 32, 00:21:10.205 "md_interleave": true, 00:21:10.205 "dif_type": 0, 00:21:10.205 "assigned_rate_limits": { 00:21:10.205 "rw_ios_per_sec": 0, 00:21:10.205 "rw_mbytes_per_sec": 0, 00:21:10.205 "r_mbytes_per_sec": 0, 00:21:10.205 "w_mbytes_per_sec": 0 00:21:10.205 }, 00:21:10.205 "claimed": true, 00:21:10.205 "claim_type": "exclusive_write", 
00:21:10.205 "zoned": false, 00:21:10.205 "supported_io_types": { 00:21:10.205 "read": true, 00:21:10.205 "write": true, 00:21:10.205 "unmap": true, 00:21:10.205 "flush": true, 00:21:10.205 "reset": true, 00:21:10.205 "nvme_admin": false, 00:21:10.205 "nvme_io": false, 00:21:10.205 "nvme_io_md": false, 00:21:10.205 "write_zeroes": true, 00:21:10.205 "zcopy": true, 00:21:10.205 "get_zone_info": false, 00:21:10.205 "zone_management": false, 00:21:10.205 "zone_append": false, 00:21:10.205 "compare": false, 00:21:10.205 "compare_and_write": false, 00:21:10.205 "abort": true, 00:21:10.205 "seek_hole": false, 00:21:10.205 "seek_data": false, 00:21:10.205 "copy": true, 00:21:10.205 "nvme_iov_md": false 00:21:10.205 }, 00:21:10.205 "memory_domains": [ 00:21:10.205 { 00:21:10.205 "dma_device_id": "system", 00:21:10.205 "dma_device_type": 1 00:21:10.205 }, 00:21:10.205 { 00:21:10.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.205 "dma_device_type": 2 00:21:10.205 } 00:21:10.205 ], 00:21:10.205 "driver_specific": {} 00:21:10.205 } 00:21:10.205 ] 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:10.205 
08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:10.205 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.206 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.206 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:10.206 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:10.206 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.206 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:10.206 "name": "Existed_Raid", 00:21:10.206 "uuid": "f93a7c32-16aa-424c-9c41-94a9e1ca7d4c", 00:21:10.206 "strip_size_kb": 0, 00:21:10.206 "state": "online", 00:21:10.206 "raid_level": "raid1", 00:21:10.206 "superblock": true, 00:21:10.206 "num_base_bdevs": 2, 00:21:10.206 "num_base_bdevs_discovered": 2, 00:21:10.206 
"num_base_bdevs_operational": 2, 00:21:10.206 "base_bdevs_list": [ 00:21:10.206 { 00:21:10.206 "name": "BaseBdev1", 00:21:10.206 "uuid": "27f2f85f-e057-402c-99dc-c29a9a0be980", 00:21:10.206 "is_configured": true, 00:21:10.206 "data_offset": 256, 00:21:10.206 "data_size": 7936 00:21:10.206 }, 00:21:10.206 { 00:21:10.206 "name": "BaseBdev2", 00:21:10.206 "uuid": "b59a49c6-8214-4f00-b655-5c43ef212288", 00:21:10.206 "is_configured": true, 00:21:10.206 "data_offset": 256, 00:21:10.206 "data_size": 7936 00:21:10.206 } 00:21:10.206 ] 00:21:10.206 }' 00:21:10.206 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:10.206 08:54:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:10.775 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:10.775 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:10.775 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:10.775 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:10.775 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:10.775 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:10.776 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:10.776 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.776 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:10.776 08:54:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:10.776 [2024-11-20 08:54:41.453515] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:10.776 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.776 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:10.776 "name": "Existed_Raid", 00:21:10.776 "aliases": [ 00:21:10.776 "f93a7c32-16aa-424c-9c41-94a9e1ca7d4c" 00:21:10.776 ], 00:21:10.776 "product_name": "Raid Volume", 00:21:10.776 "block_size": 4128, 00:21:10.776 "num_blocks": 7936, 00:21:10.776 "uuid": "f93a7c32-16aa-424c-9c41-94a9e1ca7d4c", 00:21:10.776 "md_size": 32, 00:21:10.776 "md_interleave": true, 00:21:10.776 "dif_type": 0, 00:21:10.776 "assigned_rate_limits": { 00:21:10.776 "rw_ios_per_sec": 0, 00:21:10.776 "rw_mbytes_per_sec": 0, 00:21:10.776 "r_mbytes_per_sec": 0, 00:21:10.776 "w_mbytes_per_sec": 0 00:21:10.776 }, 00:21:10.776 "claimed": false, 00:21:10.776 "zoned": false, 00:21:10.776 "supported_io_types": { 00:21:10.776 "read": true, 00:21:10.776 "write": true, 00:21:10.776 "unmap": false, 00:21:10.776 "flush": false, 00:21:10.776 "reset": true, 00:21:10.776 "nvme_admin": false, 00:21:10.776 "nvme_io": false, 00:21:10.776 "nvme_io_md": false, 00:21:10.776 "write_zeroes": true, 00:21:10.776 "zcopy": false, 00:21:10.776 "get_zone_info": false, 00:21:10.776 "zone_management": false, 00:21:10.776 "zone_append": false, 00:21:10.776 "compare": false, 00:21:10.776 "compare_and_write": false, 00:21:10.776 "abort": false, 00:21:10.776 "seek_hole": false, 00:21:10.776 "seek_data": false, 00:21:10.776 "copy": false, 00:21:10.776 "nvme_iov_md": false 00:21:10.776 }, 00:21:10.776 "memory_domains": [ 00:21:10.776 { 00:21:10.776 "dma_device_id": "system", 00:21:10.776 "dma_device_type": 1 00:21:10.776 }, 00:21:10.776 { 00:21:10.776 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:21:10.776 "dma_device_type": 2 00:21:10.776 }, 00:21:10.776 { 00:21:10.776 "dma_device_id": "system", 00:21:10.776 "dma_device_type": 1 00:21:10.776 }, 00:21:10.776 { 00:21:10.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.776 "dma_device_type": 2 00:21:10.776 } 00:21:10.776 ], 00:21:10.776 "driver_specific": { 00:21:10.776 "raid": { 00:21:10.776 "uuid": "f93a7c32-16aa-424c-9c41-94a9e1ca7d4c", 00:21:10.776 "strip_size_kb": 0, 00:21:10.776 "state": "online", 00:21:10.776 "raid_level": "raid1", 00:21:10.776 "superblock": true, 00:21:10.776 "num_base_bdevs": 2, 00:21:10.776 "num_base_bdevs_discovered": 2, 00:21:10.776 "num_base_bdevs_operational": 2, 00:21:10.776 "base_bdevs_list": [ 00:21:10.776 { 00:21:10.776 "name": "BaseBdev1", 00:21:10.776 "uuid": "27f2f85f-e057-402c-99dc-c29a9a0be980", 00:21:10.776 "is_configured": true, 00:21:10.776 "data_offset": 256, 00:21:10.776 "data_size": 7936 00:21:10.776 }, 00:21:10.776 { 00:21:10.776 "name": "BaseBdev2", 00:21:10.776 "uuid": "b59a49c6-8214-4f00-b655-5c43ef212288", 00:21:10.776 "is_configured": true, 00:21:10.776 "data_offset": 256, 00:21:10.776 "data_size": 7936 00:21:10.776 } 00:21:10.776 ] 00:21:10.776 } 00:21:10.776 } 00:21:10.776 }' 00:21:10.776 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:10.776 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:10.776 BaseBdev2' 00:21:10.776 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:10.776 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:10.776 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:21:10.776 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:10.776 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.776 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:10.776 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:10.776 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.776 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:10.776 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:10.776 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:10.776 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:10.776 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:10.776 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.776 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:10.776 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:11.036 
08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.036 [2024-11-20 08:54:41.705250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:11.036 08:54:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.036 "name": "Existed_Raid", 00:21:11.036 "uuid": "f93a7c32-16aa-424c-9c41-94a9e1ca7d4c", 00:21:11.036 "strip_size_kb": 0, 00:21:11.036 "state": "online", 00:21:11.036 "raid_level": "raid1", 00:21:11.036 "superblock": true, 00:21:11.036 "num_base_bdevs": 2, 00:21:11.036 "num_base_bdevs_discovered": 1, 00:21:11.036 "num_base_bdevs_operational": 1, 00:21:11.036 "base_bdevs_list": [ 00:21:11.036 { 00:21:11.036 "name": null, 00:21:11.036 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:11.036 "is_configured": false, 00:21:11.036 "data_offset": 0, 00:21:11.036 "data_size": 7936 00:21:11.036 }, 00:21:11.036 { 00:21:11.036 "name": "BaseBdev2", 00:21:11.036 "uuid": "b59a49c6-8214-4f00-b655-5c43ef212288", 00:21:11.036 "is_configured": true, 00:21:11.036 "data_offset": 256, 00:21:11.036 "data_size": 7936 00:21:11.036 } 00:21:11.036 ] 00:21:11.036 }' 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.036 08:54:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:11.603 08:54:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.603 [2024-11-20 08:54:42.332316] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:11.603 [2024-11-20 08:54:42.332618] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:11.603 [2024-11-20 08:54:42.416992] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:11.603 [2024-11-20 08:54:42.417303] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:11.603 [2024-11-20 08:54:42.417344] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88904 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88904 ']' 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88904 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88904 00:21:11.603 killing process with pid 88904 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88904' 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88904 00:21:11.603 [2024-11-20 08:54:42.504342] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:11.603 08:54:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88904 00:21:11.862 [2024-11-20 08:54:42.519052] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:12.798 
08:54:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:21:12.798 00:21:12.798 real 0m5.445s 00:21:12.798 user 0m8.228s 00:21:12.798 sys 0m0.782s 00:21:12.798 08:54:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:12.798 ************************************ 00:21:12.798 END TEST raid_state_function_test_sb_md_interleaved 00:21:12.798 ************************************ 00:21:12.799 08:54:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:12.799 08:54:43 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:21:12.799 08:54:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:12.799 08:54:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:12.799 08:54:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:12.799 ************************************ 00:21:12.799 START TEST raid_superblock_test_md_interleaved 00:21:12.799 ************************************ 00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89162 00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89162 00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89162 ']' 00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:12.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:12.799 08:54:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:12.799 [2024-11-20 08:54:43.687107] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:21:12.799 [2024-11-20 08:54:43.687803] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89162 ] 00:21:13.058 [2024-11-20 08:54:43.870378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.317 [2024-11-20 08:54:43.997825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.317 [2024-11-20 08:54:44.202785] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:13.317 [2024-11-20 08:54:44.202866] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:13.885 malloc1 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:13.885 [2024-11-20 08:54:44.776549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:13.885 [2024-11-20 08:54:44.776633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:21:13.885 [2024-11-20 08:54:44.776663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:13.885 [2024-11-20 08:54:44.776677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:13.885 [2024-11-20 08:54:44.779421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:13.885 [2024-11-20 08:54:44.779465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:13.885 pt1 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:21:13.885 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.885 08:54:44 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.144 malloc2 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.144 [2024-11-20 08:54:44.832842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:14.144 [2024-11-20 08:54:44.832907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:14.144 [2024-11-20 08:54:44.832939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:14.144 [2024-11-20 08:54:44.832953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:14.144 [2024-11-20 08:54:44.835399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:14.144 [2024-11-20 08:54:44.835440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:14.144 pt2 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.144 [2024-11-20 08:54:44.844894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:14.144 [2024-11-20 08:54:44.847286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:14.144 [2024-11-20 08:54:44.847591] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:14.144 [2024-11-20 08:54:44.847622] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:14.144 [2024-11-20 08:54:44.847720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:14.144 [2024-11-20 08:54:44.847821] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:14.144 [2024-11-20 08:54:44.847841] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:14.144 [2024-11-20 08:54:44.847936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:14.144 08:54:44 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:14.144 "name": "raid_bdev1", 00:21:14.144 "uuid": "91729069-3b3f-483d-a331-f6178e1adda2", 00:21:14.144 "strip_size_kb": 0, 00:21:14.144 "state": "online", 00:21:14.144 "raid_level": "raid1", 00:21:14.144 "superblock": true, 00:21:14.144 "num_base_bdevs": 2, 00:21:14.144 "num_base_bdevs_discovered": 2, 00:21:14.144 "num_base_bdevs_operational": 2, 00:21:14.144 "base_bdevs_list": [ 00:21:14.144 { 00:21:14.144 "name": "pt1", 00:21:14.144 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:14.144 "is_configured": true, 00:21:14.144 "data_offset": 256, 00:21:14.144 "data_size": 7936 00:21:14.144 }, 00:21:14.144 { 00:21:14.144 "name": "pt2", 00:21:14.144 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:21:14.144 "is_configured": true, 00:21:14.144 "data_offset": 256, 00:21:14.144 "data_size": 7936 00:21:14.144 } 00:21:14.144 ] 00:21:14.144 }' 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:14.144 08:54:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.403 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:14.403 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:14.403 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:14.403 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:14.403 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:14.403 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:14.662 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:14.662 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.662 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:14.662 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.662 [2024-11-20 08:54:45.321357] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:14.662 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.662 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:21:14.662 "name": "raid_bdev1", 00:21:14.662 "aliases": [ 00:21:14.662 "91729069-3b3f-483d-a331-f6178e1adda2" 00:21:14.662 ], 00:21:14.662 "product_name": "Raid Volume", 00:21:14.662 "block_size": 4128, 00:21:14.662 "num_blocks": 7936, 00:21:14.662 "uuid": "91729069-3b3f-483d-a331-f6178e1adda2", 00:21:14.662 "md_size": 32, 00:21:14.662 "md_interleave": true, 00:21:14.662 "dif_type": 0, 00:21:14.662 "assigned_rate_limits": { 00:21:14.662 "rw_ios_per_sec": 0, 00:21:14.662 "rw_mbytes_per_sec": 0, 00:21:14.662 "r_mbytes_per_sec": 0, 00:21:14.662 "w_mbytes_per_sec": 0 00:21:14.662 }, 00:21:14.662 "claimed": false, 00:21:14.662 "zoned": false, 00:21:14.662 "supported_io_types": { 00:21:14.662 "read": true, 00:21:14.662 "write": true, 00:21:14.662 "unmap": false, 00:21:14.662 "flush": false, 00:21:14.662 "reset": true, 00:21:14.662 "nvme_admin": false, 00:21:14.662 "nvme_io": false, 00:21:14.662 "nvme_io_md": false, 00:21:14.662 "write_zeroes": true, 00:21:14.662 "zcopy": false, 00:21:14.662 "get_zone_info": false, 00:21:14.662 "zone_management": false, 00:21:14.662 "zone_append": false, 00:21:14.662 "compare": false, 00:21:14.662 "compare_and_write": false, 00:21:14.662 "abort": false, 00:21:14.662 "seek_hole": false, 00:21:14.662 "seek_data": false, 00:21:14.662 "copy": false, 00:21:14.662 "nvme_iov_md": false 00:21:14.662 }, 00:21:14.662 "memory_domains": [ 00:21:14.662 { 00:21:14.662 "dma_device_id": "system", 00:21:14.662 "dma_device_type": 1 00:21:14.662 }, 00:21:14.662 { 00:21:14.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.663 "dma_device_type": 2 00:21:14.663 }, 00:21:14.663 { 00:21:14.663 "dma_device_id": "system", 00:21:14.663 "dma_device_type": 1 00:21:14.663 }, 00:21:14.663 { 00:21:14.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.663 "dma_device_type": 2 00:21:14.663 } 00:21:14.663 ], 00:21:14.663 "driver_specific": { 00:21:14.663 "raid": { 00:21:14.663 "uuid": "91729069-3b3f-483d-a331-f6178e1adda2", 00:21:14.663 "strip_size_kb": 0, 
00:21:14.663 "state": "online", 00:21:14.663 "raid_level": "raid1", 00:21:14.663 "superblock": true, 00:21:14.663 "num_base_bdevs": 2, 00:21:14.663 "num_base_bdevs_discovered": 2, 00:21:14.663 "num_base_bdevs_operational": 2, 00:21:14.663 "base_bdevs_list": [ 00:21:14.663 { 00:21:14.663 "name": "pt1", 00:21:14.663 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:14.663 "is_configured": true, 00:21:14.663 "data_offset": 256, 00:21:14.663 "data_size": 7936 00:21:14.663 }, 00:21:14.663 { 00:21:14.663 "name": "pt2", 00:21:14.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:14.663 "is_configured": true, 00:21:14.663 "data_offset": 256, 00:21:14.663 "data_size": 7936 00:21:14.663 } 00:21:14.663 ] 00:21:14.663 } 00:21:14.663 } 00:21:14.663 }' 00:21:14.663 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:14.663 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:14.663 pt2' 00:21:14.663 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:14.663 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:14.663 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:14.663 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:14.663 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.663 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.663 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:21:14.663 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.663 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:14.663 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:14.663 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:14.663 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:14.663 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:14.663 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.663 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.663 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.663 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:14.663 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:14.663 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:14.663 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.663 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.663 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:21:14.663 [2024-11-20 08:54:45.569361] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=91729069-3b3f-483d-a331-f6178e1adda2 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 91729069-3b3f-483d-a331-f6178e1adda2 ']' 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.923 [2024-11-20 08:54:45.621016] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:14.923 [2024-11-20 08:54:45.621053] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:14.923 [2024-11-20 08:54:45.621177] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:14.923 [2024-11-20 08:54:45.621257] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:14.923 [2024-11-20 08:54:45.621276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:14.923 08:54:45 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r 
'[.[] | select(.product_name == "passthru")] | any' 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.923 [2024-11-20 08:54:45.761102] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:14.923 [2024-11-20 08:54:45.763566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:14.923 [2024-11-20 08:54:45.763667] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:14.923 [2024-11-20 08:54:45.763743] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:14.923 [2024-11-20 08:54:45.763770] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:14.923 [2024-11-20 08:54:45.763786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:14.923 request: 00:21:14.923 { 00:21:14.923 "name": "raid_bdev1", 00:21:14.923 "raid_level": "raid1", 00:21:14.923 "base_bdevs": [ 00:21:14.923 "malloc1", 00:21:14.923 "malloc2" 00:21:14.923 ], 00:21:14.923 "superblock": false, 00:21:14.923 "method": "bdev_raid_create", 00:21:14.923 "req_id": 1 00:21:14.923 } 00:21:14.923 Got JSON-RPC error response 00:21:14.923 response: 00:21:14.923 { 00:21:14.923 "code": -17, 00:21:14.923 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:14.923 } 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:14.923 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.924 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:14.924 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:14.924 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:14.924 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.924 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:14.924 [2024-11-20 08:54:45.821073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:14.924 [2024-11-20 08:54:45.821141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:14.924 [2024-11-20 08:54:45.821182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:14.924 [2024-11-20 08:54:45.821200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:14.924 [2024-11-20 08:54:45.823661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:14.924 [2024-11-20 08:54:45.823705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:14.924 [2024-11-20 08:54:45.823771] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt1 00:21:14.924 [2024-11-20 08:54:45.823851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:14.924 pt1 00:21:14.924 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.924 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:14.924 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:14.924 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:14.924 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:14.924 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:14.924 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:14.924 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.924 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.924 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:14.924 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:14.924 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.924 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.924 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.924 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:21:15.182 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.182 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:15.182 "name": "raid_bdev1", 00:21:15.182 "uuid": "91729069-3b3f-483d-a331-f6178e1adda2", 00:21:15.182 "strip_size_kb": 0, 00:21:15.182 "state": "configuring", 00:21:15.182 "raid_level": "raid1", 00:21:15.182 "superblock": true, 00:21:15.182 "num_base_bdevs": 2, 00:21:15.183 "num_base_bdevs_discovered": 1, 00:21:15.183 "num_base_bdevs_operational": 2, 00:21:15.183 "base_bdevs_list": [ 00:21:15.183 { 00:21:15.183 "name": "pt1", 00:21:15.183 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:15.183 "is_configured": true, 00:21:15.183 "data_offset": 256, 00:21:15.183 "data_size": 7936 00:21:15.183 }, 00:21:15.183 { 00:21:15.183 "name": null, 00:21:15.183 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:15.183 "is_configured": false, 00:21:15.183 "data_offset": 256, 00:21:15.183 "data_size": 7936 00:21:15.183 } 00:21:15.183 ] 00:21:15.183 }' 00:21:15.183 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:15.183 08:54:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:15.441 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:15.441 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:15.441 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:15.441 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:15.441 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:21:15.441 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:15.441 [2024-11-20 08:54:46.317223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:15.441 [2024-11-20 08:54:46.317313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:15.441 [2024-11-20 08:54:46.317344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:15.441 [2024-11-20 08:54:46.317361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:15.441 [2024-11-20 08:54:46.317579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:15.441 [2024-11-20 08:54:46.317606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:15.441 [2024-11-20 08:54:46.317671] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:15.441 [2024-11-20 08:54:46.317709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:15.441 [2024-11-20 08:54:46.317823] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:15.441 [2024-11-20 08:54:46.317844] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:15.441 [2024-11-20 08:54:46.317929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:15.441 [2024-11-20 08:54:46.318027] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:15.442 [2024-11-20 08:54:46.318042] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:15.442 [2024-11-20 08:54:46.318127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:15.442 pt2 00:21:15.442 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:21:15.442 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:15.442 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:15.442 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:15.442 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:15.442 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:15.442 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:15.442 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:15.442 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:15.442 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:15.442 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:15.442 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:15.442 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:15.442 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.442 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.442 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.442 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set 
+x 00:21:15.442 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.701 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:15.701 "name": "raid_bdev1", 00:21:15.701 "uuid": "91729069-3b3f-483d-a331-f6178e1adda2", 00:21:15.701 "strip_size_kb": 0, 00:21:15.701 "state": "online", 00:21:15.701 "raid_level": "raid1", 00:21:15.701 "superblock": true, 00:21:15.701 "num_base_bdevs": 2, 00:21:15.701 "num_base_bdevs_discovered": 2, 00:21:15.701 "num_base_bdevs_operational": 2, 00:21:15.701 "base_bdevs_list": [ 00:21:15.701 { 00:21:15.701 "name": "pt1", 00:21:15.701 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:15.701 "is_configured": true, 00:21:15.701 "data_offset": 256, 00:21:15.701 "data_size": 7936 00:21:15.701 }, 00:21:15.701 { 00:21:15.701 "name": "pt2", 00:21:15.701 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:15.701 "is_configured": true, 00:21:15.701 "data_offset": 256, 00:21:15.701 "data_size": 7936 00:21:15.701 } 00:21:15.701 ] 00:21:15.701 }' 00:21:15.701 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:15.701 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:15.960 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:15.960 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:15.960 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:15.960 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:15.960 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:15.960 08:54:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:15.960 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:15.960 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:15.960 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.960 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:15.960 [2024-11-20 08:54:46.809668] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:15.960 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.960 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:15.960 "name": "raid_bdev1", 00:21:15.960 "aliases": [ 00:21:15.960 "91729069-3b3f-483d-a331-f6178e1adda2" 00:21:15.960 ], 00:21:15.960 "product_name": "Raid Volume", 00:21:15.960 "block_size": 4128, 00:21:15.960 "num_blocks": 7936, 00:21:15.960 "uuid": "91729069-3b3f-483d-a331-f6178e1adda2", 00:21:15.960 "md_size": 32, 00:21:15.960 "md_interleave": true, 00:21:15.960 "dif_type": 0, 00:21:15.960 "assigned_rate_limits": { 00:21:15.960 "rw_ios_per_sec": 0, 00:21:15.960 "rw_mbytes_per_sec": 0, 00:21:15.960 "r_mbytes_per_sec": 0, 00:21:15.960 "w_mbytes_per_sec": 0 00:21:15.960 }, 00:21:15.960 "claimed": false, 00:21:15.960 "zoned": false, 00:21:15.960 "supported_io_types": { 00:21:15.960 "read": true, 00:21:15.960 "write": true, 00:21:15.960 "unmap": false, 00:21:15.960 "flush": false, 00:21:15.960 "reset": true, 00:21:15.960 "nvme_admin": false, 00:21:15.960 "nvme_io": false, 00:21:15.960 "nvme_io_md": false, 00:21:15.960 "write_zeroes": true, 00:21:15.960 "zcopy": false, 00:21:15.960 "get_zone_info": false, 00:21:15.960 "zone_management": 
false, 00:21:15.960 "zone_append": false, 00:21:15.960 "compare": false, 00:21:15.960 "compare_and_write": false, 00:21:15.961 "abort": false, 00:21:15.961 "seek_hole": false, 00:21:15.961 "seek_data": false, 00:21:15.961 "copy": false, 00:21:15.961 "nvme_iov_md": false 00:21:15.961 }, 00:21:15.961 "memory_domains": [ 00:21:15.961 { 00:21:15.961 "dma_device_id": "system", 00:21:15.961 "dma_device_type": 1 00:21:15.961 }, 00:21:15.961 { 00:21:15.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.961 "dma_device_type": 2 00:21:15.961 }, 00:21:15.961 { 00:21:15.961 "dma_device_id": "system", 00:21:15.961 "dma_device_type": 1 00:21:15.961 }, 00:21:15.961 { 00:21:15.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.961 "dma_device_type": 2 00:21:15.961 } 00:21:15.961 ], 00:21:15.961 "driver_specific": { 00:21:15.961 "raid": { 00:21:15.961 "uuid": "91729069-3b3f-483d-a331-f6178e1adda2", 00:21:15.961 "strip_size_kb": 0, 00:21:15.961 "state": "online", 00:21:15.961 "raid_level": "raid1", 00:21:15.961 "superblock": true, 00:21:15.961 "num_base_bdevs": 2, 00:21:15.961 "num_base_bdevs_discovered": 2, 00:21:15.961 "num_base_bdevs_operational": 2, 00:21:15.961 "base_bdevs_list": [ 00:21:15.961 { 00:21:15.961 "name": "pt1", 00:21:15.961 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:15.961 "is_configured": true, 00:21:15.961 "data_offset": 256, 00:21:15.961 "data_size": 7936 00:21:15.961 }, 00:21:15.961 { 00:21:15.961 "name": "pt2", 00:21:15.961 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:15.961 "is_configured": true, 00:21:15.961 "data_offset": 256, 00:21:15.961 "data_size": 7936 00:21:15.961 } 00:21:15.961 ] 00:21:15.961 } 00:21:15.961 } 00:21:15.961 }' 00:21:15.961 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:16.219 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 
00:21:16.219 pt2' 00:21:16.219 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:16.219 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:16.219 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:16.219 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:16.219 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.219 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:16.220 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:16.220 08:54:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:16.220 [2024-11-20 08:54:47.065765] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 91729069-3b3f-483d-a331-f6178e1adda2 '!=' 91729069-3b3f-483d-a331-f6178e1adda2 ']' 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.220 08:54:47 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:16.220 [2024-11-20 08:54:47.117469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:16.220 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.479 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.479 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:16.479 "name": "raid_bdev1", 00:21:16.479 "uuid": "91729069-3b3f-483d-a331-f6178e1adda2", 00:21:16.479 "strip_size_kb": 0, 00:21:16.479 "state": "online", 00:21:16.479 "raid_level": "raid1", 00:21:16.479 "superblock": true, 00:21:16.479 "num_base_bdevs": 2, 00:21:16.479 "num_base_bdevs_discovered": 1, 00:21:16.479 "num_base_bdevs_operational": 1, 00:21:16.479 "base_bdevs_list": [ 00:21:16.479 { 00:21:16.479 "name": null, 00:21:16.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.479 "is_configured": false, 00:21:16.479 "data_offset": 0, 00:21:16.479 "data_size": 7936 00:21:16.479 }, 00:21:16.479 { 00:21:16.479 "name": "pt2", 00:21:16.480 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:16.480 "is_configured": true, 00:21:16.480 "data_offset": 256, 00:21:16.480 "data_size": 7936 00:21:16.480 } 00:21:16.480 ] 00:21:16.480 }' 00:21:16.480 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:16.480 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:16.739 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:16.739 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.739 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:16.739 [2024-11-20 08:54:47.617678] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:16.739 [2024-11-20 08:54:47.617749] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online 
to offline 00:21:16.739 [2024-11-20 08:54:47.617887] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:16.739 [2024-11-20 08:54:47.617983] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:16.739 [2024-11-20 08:54:47.618009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:16.739 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.739 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.739 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:16.739 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.739 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:16.739 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.034 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:17.034 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:17.034 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:17.034 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:17.034 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:17.034 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.034 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.034 08:54:47 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.034 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:17.034 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:17.034 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:17.034 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:17.034 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:21:17.034 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:17.034 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.034 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.034 [2024-11-20 08:54:47.693646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:17.035 [2024-11-20 08:54:47.693751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:17.035 [2024-11-20 08:54:47.693782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:17.035 [2024-11-20 08:54:47.693804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:17.035 [2024-11-20 08:54:47.697124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:17.035 [2024-11-20 08:54:47.697206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:17.035 [2024-11-20 08:54:47.697299] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:17.035 [2024-11-20 08:54:47.697382] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:17.035 [2024-11-20 08:54:47.697498] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:17.035 [2024-11-20 08:54:47.697525] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:17.035 [2024-11-20 08:54:47.697658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:17.035 [2024-11-20 08:54:47.697791] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:17.035 [2024-11-20 08:54:47.697809] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:17.035 [2024-11-20 08:54:47.697973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:17.035 pt2 00:21:17.035 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.035 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:17.035 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:17.035 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:17.035 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:17.035 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:17.035 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:17.035 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.035 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:21:17.035 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.035 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.035 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.035 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.035 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.035 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.035 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.035 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.035 "name": "raid_bdev1", 00:21:17.035 "uuid": "91729069-3b3f-483d-a331-f6178e1adda2", 00:21:17.035 "strip_size_kb": 0, 00:21:17.035 "state": "online", 00:21:17.035 "raid_level": "raid1", 00:21:17.035 "superblock": true, 00:21:17.035 "num_base_bdevs": 2, 00:21:17.035 "num_base_bdevs_discovered": 1, 00:21:17.035 "num_base_bdevs_operational": 1, 00:21:17.035 "base_bdevs_list": [ 00:21:17.035 { 00:21:17.035 "name": null, 00:21:17.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.035 "is_configured": false, 00:21:17.035 "data_offset": 256, 00:21:17.035 "data_size": 7936 00:21:17.035 }, 00:21:17.035 { 00:21:17.035 "name": "pt2", 00:21:17.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:17.035 "is_configured": true, 00:21:17.035 "data_offset": 256, 00:21:17.035 "data_size": 7936 00:21:17.035 } 00:21:17.035 ] 00:21:17.035 }' 00:21:17.035 08:54:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.035 08:54:47 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.325 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:17.325 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.325 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.325 [2024-11-20 08:54:48.229809] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:17.325 [2024-11-20 08:54:48.229923] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:17.325 [2024-11-20 08:54:48.230036] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:17.325 [2024-11-20 08:54:48.230114] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:17.325 [2024-11-20 08:54:48.230131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:17.325 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.325 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:17.325 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.325 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.325 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.584 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.584 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:17.584 08:54:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:17.584 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:17.584 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:17.584 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.584 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.584 [2024-11-20 08:54:48.305845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:17.584 [2024-11-20 08:54:48.305951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:17.584 [2024-11-20 08:54:48.305988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:17.584 [2024-11-20 08:54:48.306004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:17.584 [2024-11-20 08:54:48.308828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:17.584 [2024-11-20 08:54:48.308868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:17.584 [2024-11-20 08:54:48.308952] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:17.584 [2024-11-20 08:54:48.309018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:17.584 [2024-11-20 08:54:48.309174] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:17.584 [2024-11-20 08:54:48.309193] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:17.584 [2024-11-20 08:54:48.309220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:21:17.584 [2024-11-20 08:54:48.309293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:17.584 [2024-11-20 08:54:48.309399] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:17.584 [2024-11-20 08:54:48.309414] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:17.584 [2024-11-20 08:54:48.309503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:17.584 [2024-11-20 08:54:48.309595] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:17.584 [2024-11-20 08:54:48.309613] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:17.584 [2024-11-20 08:54:48.309761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:17.584 pt1 00:21:17.584 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.584 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:21:17.584 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:17.584 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:17.584 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:17.584 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:17.584 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:17.584 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:17.584 08:54:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.584 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.584 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.584 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.584 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.584 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.584 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:17.584 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.584 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.584 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.584 "name": "raid_bdev1", 00:21:17.584 "uuid": "91729069-3b3f-483d-a331-f6178e1adda2", 00:21:17.584 "strip_size_kb": 0, 00:21:17.584 "state": "online", 00:21:17.584 "raid_level": "raid1", 00:21:17.584 "superblock": true, 00:21:17.584 "num_base_bdevs": 2, 00:21:17.584 "num_base_bdevs_discovered": 1, 00:21:17.584 "num_base_bdevs_operational": 1, 00:21:17.585 "base_bdevs_list": [ 00:21:17.585 { 00:21:17.585 "name": null, 00:21:17.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.585 "is_configured": false, 00:21:17.585 "data_offset": 256, 00:21:17.585 "data_size": 7936 00:21:17.585 }, 00:21:17.585 { 00:21:17.585 "name": "pt2", 00:21:17.585 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:17.585 "is_configured": true, 00:21:17.585 "data_offset": 256, 00:21:17.585 
"data_size": 7936 00:21:17.585 } 00:21:17.585 ] 00:21:17.585 }' 00:21:17.585 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.585 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.153 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:18.153 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.153 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.153 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:18.153 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.153 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:18.153 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:18.153 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:18.153 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.153 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:18.153 [2024-11-20 08:54:48.874350] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:18.153 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.153 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 91729069-3b3f-483d-a331-f6178e1adda2 '!=' 91729069-3b3f-483d-a331-f6178e1adda2 ']' 00:21:18.153 08:54:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89162 00:21:18.153 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89162 ']' 00:21:18.153 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89162 00:21:18.153 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:21:18.153 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:18.153 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89162 00:21:18.153 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:18.153 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:18.153 killing process with pid 89162 00:21:18.153 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89162' 00:21:18.153 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89162 00:21:18.153 [2024-11-20 08:54:48.951775] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:18.153 08:54:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89162 00:21:18.153 [2024-11-20 08:54:48.951920] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:18.153 [2024-11-20 08:54:48.951995] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:18.153 [2024-11-20 08:54:48.952020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:18.412 [2024-11-20 08:54:49.156530] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:19.790 08:54:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:21:19.790 00:21:19.790 real 0m6.714s 00:21:19.790 user 0m10.569s 00:21:19.790 sys 0m0.900s 00:21:19.790 08:54:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:19.790 08:54:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.790 ************************************ 00:21:19.790 END TEST raid_superblock_test_md_interleaved 00:21:19.790 ************************************ 00:21:19.790 08:54:50 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:21:19.790 08:54:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:19.790 08:54:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:19.790 08:54:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:19.790 ************************************ 00:21:19.790 START TEST raid_rebuild_test_sb_md_interleaved 00:21:19.790 ************************************ 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:21:19.790 08:54:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:19.790 
08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89485 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89485 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89485 ']' 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.790 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.791 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.791 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.791 08:54:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:19.791 [2024-11-20 08:54:50.466415] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:21:19.791 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:19.791 Zero copy mechanism will not be used. 
00:21:19.791 [2024-11-20 08:54:50.466603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89485 ] 00:21:19.791 [2024-11-20 08:54:50.651917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.049 [2024-11-20 08:54:50.809817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.308 [2024-11-20 08:54:51.052897] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:20.308 [2024-11-20 08:54:51.052993] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:20.567 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:20.567 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:21:20.567 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:20.567 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:21:20.567 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.567 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.567 BaseBdev1_malloc 00:21:20.567 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.567 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:20.567 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.567 08:54:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.827 [2024-11-20 08:54:51.483375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:20.827 [2024-11-20 08:54:51.483450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:20.827 [2024-11-20 08:54:51.483480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:20.827 [2024-11-20 08:54:51.483499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:20.827 [2024-11-20 08:54:51.486079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:20.827 [2024-11-20 08:54:51.486142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:20.827 BaseBdev1 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.827 BaseBdev2_malloc 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:21:20.827 [2024-11-20 08:54:51.540134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:20.827 [2024-11-20 08:54:51.540270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:20.827 [2024-11-20 08:54:51.540301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:20.827 [2024-11-20 08:54:51.540322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:20.827 [2024-11-20 08:54:51.543023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:20.827 [2024-11-20 08:54:51.543069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:20.827 BaseBdev2 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.827 spare_malloc 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.827 spare_delay 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.827 [2024-11-20 08:54:51.614753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:20.827 [2024-11-20 08:54:51.614826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:20.827 [2024-11-20 08:54:51.614857] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:20.827 [2024-11-20 08:54:51.614876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:20.827 [2024-11-20 08:54:51.617704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:20.827 [2024-11-20 08:54:51.617752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:20.827 spare 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.827 [2024-11-20 08:54:51.626818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:20.827 [2024-11-20 08:54:51.629504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:20.827 [2024-11-20 
08:54:51.629779] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:20.827 [2024-11-20 08:54:51.629804] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:20.827 [2024-11-20 08:54:51.629909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:20.827 [2024-11-20 08:54:51.630017] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:20.827 [2024-11-20 08:54:51.630047] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:20.827 [2024-11-20 08:54:51.630142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:20.827 "name": "raid_bdev1", 00:21:20.827 "uuid": "1f877212-2e5a-4d8c-9e5a-86b131ca3dce", 00:21:20.827 "strip_size_kb": 0, 00:21:20.827 "state": "online", 00:21:20.827 "raid_level": "raid1", 00:21:20.827 "superblock": true, 00:21:20.827 "num_base_bdevs": 2, 00:21:20.827 "num_base_bdevs_discovered": 2, 00:21:20.827 "num_base_bdevs_operational": 2, 00:21:20.827 "base_bdevs_list": [ 00:21:20.827 { 00:21:20.827 "name": "BaseBdev1", 00:21:20.827 "uuid": "b3fe58db-8d9d-5b81-af32-b58904a68033", 00:21:20.827 "is_configured": true, 00:21:20.827 "data_offset": 256, 00:21:20.827 "data_size": 7936 00:21:20.827 }, 00:21:20.827 { 00:21:20.827 "name": "BaseBdev2", 00:21:20.827 "uuid": "7aa3fe93-b54e-59ba-aeff-b098f0020e61", 00:21:20.827 "is_configured": true, 00:21:20.827 "data_offset": 256, 00:21:20.827 "data_size": 7936 00:21:20.827 } 00:21:20.827 ] 00:21:20.827 }' 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:20.827 08:54:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.395 08:54:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:21.395 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:21.395 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.395 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.395 [2024-11-20 08:54:52.163348] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:21.395 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.395 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:21:21.395 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:21.395 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.395 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.395 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.395 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.395 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:21:21.395 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:21.395 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:21:21.395 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:21.395 08:54:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.395 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.395 [2024-11-20 08:54:52.266923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:21.395 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.395 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:21.395 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:21.395 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:21.395 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:21.395 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:21.395 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:21.395 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.395 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.396 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.396 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.396 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.396 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.396 08:54:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.396 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.396 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.654 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.654 "name": "raid_bdev1", 00:21:21.654 "uuid": "1f877212-2e5a-4d8c-9e5a-86b131ca3dce", 00:21:21.654 "strip_size_kb": 0, 00:21:21.654 "state": "online", 00:21:21.654 "raid_level": "raid1", 00:21:21.654 "superblock": true, 00:21:21.654 "num_base_bdevs": 2, 00:21:21.654 "num_base_bdevs_discovered": 1, 00:21:21.654 "num_base_bdevs_operational": 1, 00:21:21.654 "base_bdevs_list": [ 00:21:21.654 { 00:21:21.654 "name": null, 00:21:21.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.654 "is_configured": false, 00:21:21.654 "data_offset": 0, 00:21:21.654 "data_size": 7936 00:21:21.654 }, 00:21:21.654 { 00:21:21.654 "name": "BaseBdev2", 00:21:21.654 "uuid": "7aa3fe93-b54e-59ba-aeff-b098f0020e61", 00:21:21.654 "is_configured": true, 00:21:21.654 "data_offset": 256, 00:21:21.654 "data_size": 7936 00:21:21.654 } 00:21:21.654 ] 00:21:21.654 }' 00:21:21.654 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.654 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.913 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:21.913 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.913 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:21.913 [2024-11-20 08:54:52.783096] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:21.913 [2024-11-20 08:54:52.799499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:21.913 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.913 08:54:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:21.913 [2024-11-20 08:54:52.802139] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:23.291 08:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:23.291 08:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:23.291 08:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:23.291 08:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:23.291 08:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:23.291 08:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.291 08:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.292 08:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.292 08:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.292 08:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.292 08:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:23.292 "name": "raid_bdev1", 00:21:23.292 
"uuid": "1f877212-2e5a-4d8c-9e5a-86b131ca3dce", 00:21:23.292 "strip_size_kb": 0, 00:21:23.292 "state": "online", 00:21:23.292 "raid_level": "raid1", 00:21:23.292 "superblock": true, 00:21:23.292 "num_base_bdevs": 2, 00:21:23.292 "num_base_bdevs_discovered": 2, 00:21:23.292 "num_base_bdevs_operational": 2, 00:21:23.292 "process": { 00:21:23.292 "type": "rebuild", 00:21:23.292 "target": "spare", 00:21:23.292 "progress": { 00:21:23.292 "blocks": 2560, 00:21:23.292 "percent": 32 00:21:23.292 } 00:21:23.292 }, 00:21:23.292 "base_bdevs_list": [ 00:21:23.292 { 00:21:23.292 "name": "spare", 00:21:23.292 "uuid": "cbf2b182-ed31-5daf-b325-57c1d2aa2cbc", 00:21:23.292 "is_configured": true, 00:21:23.292 "data_offset": 256, 00:21:23.292 "data_size": 7936 00:21:23.292 }, 00:21:23.292 { 00:21:23.292 "name": "BaseBdev2", 00:21:23.292 "uuid": "7aa3fe93-b54e-59ba-aeff-b098f0020e61", 00:21:23.292 "is_configured": true, 00:21:23.292 "data_offset": 256, 00:21:23.292 "data_size": 7936 00:21:23.292 } 00:21:23.292 ] 00:21:23.292 }' 00:21:23.292 08:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:23.292 08:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:23.292 08:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:23.292 08:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:23.292 08:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:23.292 08:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.292 08:54:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.292 [2024-11-20 08:54:53.971195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:21:23.292 [2024-11-20 08:54:54.011179] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:23.292 [2024-11-20 08:54:54.011308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.292 [2024-11-20 08:54:54.011334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:23.292 [2024-11-20 08:54:54.011355] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:23.292 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.292 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:23.292 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:23.292 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:23.292 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:23.292 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:23.292 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:23.292 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.292 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.292 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.292 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.292 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.292 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.292 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.292 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.292 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.292 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.292 "name": "raid_bdev1", 00:21:23.292 "uuid": "1f877212-2e5a-4d8c-9e5a-86b131ca3dce", 00:21:23.292 "strip_size_kb": 0, 00:21:23.292 "state": "online", 00:21:23.292 "raid_level": "raid1", 00:21:23.292 "superblock": true, 00:21:23.292 "num_base_bdevs": 2, 00:21:23.292 "num_base_bdevs_discovered": 1, 00:21:23.292 "num_base_bdevs_operational": 1, 00:21:23.292 "base_bdevs_list": [ 00:21:23.292 { 00:21:23.292 "name": null, 00:21:23.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.292 "is_configured": false, 00:21:23.292 "data_offset": 0, 00:21:23.292 "data_size": 7936 00:21:23.292 }, 00:21:23.292 { 00:21:23.292 "name": "BaseBdev2", 00:21:23.292 "uuid": "7aa3fe93-b54e-59ba-aeff-b098f0020e61", 00:21:23.292 "is_configured": true, 00:21:23.292 "data_offset": 256, 00:21:23.292 "data_size": 7936 00:21:23.292 } 00:21:23.292 ] 00:21:23.292 }' 00:21:23.292 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.292 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.859 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:23.859 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:21:23.859 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:23.859 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:23.859 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:23.859 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.859 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.859 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.859 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.859 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.859 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:23.859 "name": "raid_bdev1", 00:21:23.859 "uuid": "1f877212-2e5a-4d8c-9e5a-86b131ca3dce", 00:21:23.859 "strip_size_kb": 0, 00:21:23.859 "state": "online", 00:21:23.859 "raid_level": "raid1", 00:21:23.859 "superblock": true, 00:21:23.859 "num_base_bdevs": 2, 00:21:23.859 "num_base_bdevs_discovered": 1, 00:21:23.859 "num_base_bdevs_operational": 1, 00:21:23.859 "base_bdevs_list": [ 00:21:23.859 { 00:21:23.859 "name": null, 00:21:23.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.859 "is_configured": false, 00:21:23.859 "data_offset": 0, 00:21:23.859 "data_size": 7936 00:21:23.859 }, 00:21:23.859 { 00:21:23.859 "name": "BaseBdev2", 00:21:23.859 "uuid": "7aa3fe93-b54e-59ba-aeff-b098f0020e61", 00:21:23.859 "is_configured": true, 00:21:23.859 "data_offset": 256, 00:21:23.859 "data_size": 7936 00:21:23.859 } 00:21:23.859 ] 00:21:23.859 }' 
00:21:23.859 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:23.859 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:23.859 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:23.859 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:23.859 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:23.859 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.859 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:23.859 [2024-11-20 08:54:54.707902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:23.859 [2024-11-20 08:54:54.723889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:23.859 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.859 08:54:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:23.859 [2024-11-20 08:54:54.726458] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:25.231 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:25.231 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:25.231 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:25.231 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:21:25.231 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:25.231 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.231 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.231 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.231 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:25.231 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.231 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:25.231 "name": "raid_bdev1", 00:21:25.231 "uuid": "1f877212-2e5a-4d8c-9e5a-86b131ca3dce", 00:21:25.231 "strip_size_kb": 0, 00:21:25.231 "state": "online", 00:21:25.231 "raid_level": "raid1", 00:21:25.231 "superblock": true, 00:21:25.231 "num_base_bdevs": 2, 00:21:25.231 "num_base_bdevs_discovered": 2, 00:21:25.231 "num_base_bdevs_operational": 2, 00:21:25.231 "process": { 00:21:25.231 "type": "rebuild", 00:21:25.231 "target": "spare", 00:21:25.231 "progress": { 00:21:25.231 "blocks": 2560, 00:21:25.231 "percent": 32 00:21:25.231 } 00:21:25.231 }, 00:21:25.231 "base_bdevs_list": [ 00:21:25.231 { 00:21:25.231 "name": "spare", 00:21:25.231 "uuid": "cbf2b182-ed31-5daf-b325-57c1d2aa2cbc", 00:21:25.231 "is_configured": true, 00:21:25.231 "data_offset": 256, 00:21:25.231 "data_size": 7936 00:21:25.231 }, 00:21:25.231 { 00:21:25.231 "name": "BaseBdev2", 00:21:25.231 "uuid": "7aa3fe93-b54e-59ba-aeff-b098f0020e61", 00:21:25.231 "is_configured": true, 00:21:25.231 "data_offset": 256, 00:21:25.231 "data_size": 7936 00:21:25.231 } 00:21:25.231 ] 00:21:25.231 }' 00:21:25.231 08:54:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:25.231 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:25.231 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:25.232 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:25.232 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:25.232 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:25.232 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:25.232 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:21:25.232 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:25.232 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:21:25.232 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=796 00:21:25.232 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:25.232 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:25.232 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:25.232 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:25.232 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:25.232 08:54:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:25.232 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.232 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.232 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:25.232 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.232 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.232 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:25.232 "name": "raid_bdev1", 00:21:25.232 "uuid": "1f877212-2e5a-4d8c-9e5a-86b131ca3dce", 00:21:25.232 "strip_size_kb": 0, 00:21:25.232 "state": "online", 00:21:25.232 "raid_level": "raid1", 00:21:25.232 "superblock": true, 00:21:25.232 "num_base_bdevs": 2, 00:21:25.232 "num_base_bdevs_discovered": 2, 00:21:25.232 "num_base_bdevs_operational": 2, 00:21:25.232 "process": { 00:21:25.232 "type": "rebuild", 00:21:25.232 "target": "spare", 00:21:25.232 "progress": { 00:21:25.232 "blocks": 2816, 00:21:25.232 "percent": 35 00:21:25.232 } 00:21:25.232 }, 00:21:25.232 "base_bdevs_list": [ 00:21:25.232 { 00:21:25.232 "name": "spare", 00:21:25.232 "uuid": "cbf2b182-ed31-5daf-b325-57c1d2aa2cbc", 00:21:25.232 "is_configured": true, 00:21:25.232 "data_offset": 256, 00:21:25.232 "data_size": 7936 00:21:25.232 }, 00:21:25.232 { 00:21:25.232 "name": "BaseBdev2", 00:21:25.232 "uuid": "7aa3fe93-b54e-59ba-aeff-b098f0020e61", 00:21:25.232 "is_configured": true, 00:21:25.232 "data_offset": 256, 00:21:25.232 "data_size": 7936 00:21:25.232 } 00:21:25.232 ] 00:21:25.232 }' 00:21:25.232 08:54:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:25.232 08:54:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:25.232 08:54:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:25.232 08:54:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:25.232 08:54:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:26.163 08:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:26.163 08:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:26.163 08:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:26.163 08:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:26.163 08:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:26.163 08:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:26.163 08:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.163 08:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.163 08:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:26.163 08:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.422 08:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.422 08:54:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:26.422 "name": "raid_bdev1", 00:21:26.422 "uuid": "1f877212-2e5a-4d8c-9e5a-86b131ca3dce", 00:21:26.422 "strip_size_kb": 0, 00:21:26.422 "state": "online", 00:21:26.422 "raid_level": "raid1", 00:21:26.422 "superblock": true, 00:21:26.422 "num_base_bdevs": 2, 00:21:26.422 "num_base_bdevs_discovered": 2, 00:21:26.422 "num_base_bdevs_operational": 2, 00:21:26.422 "process": { 00:21:26.422 "type": "rebuild", 00:21:26.422 "target": "spare", 00:21:26.422 "progress": { 00:21:26.422 "blocks": 5888, 00:21:26.422 "percent": 74 00:21:26.422 } 00:21:26.422 }, 00:21:26.422 "base_bdevs_list": [ 00:21:26.422 { 00:21:26.422 "name": "spare", 00:21:26.422 "uuid": "cbf2b182-ed31-5daf-b325-57c1d2aa2cbc", 00:21:26.422 "is_configured": true, 00:21:26.422 "data_offset": 256, 00:21:26.422 "data_size": 7936 00:21:26.422 }, 00:21:26.422 { 00:21:26.422 "name": "BaseBdev2", 00:21:26.422 "uuid": "7aa3fe93-b54e-59ba-aeff-b098f0020e61", 00:21:26.422 "is_configured": true, 00:21:26.422 "data_offset": 256, 00:21:26.422 "data_size": 7936 00:21:26.422 } 00:21:26.422 ] 00:21:26.422 }' 00:21:26.422 08:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:26.422 08:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:26.422 08:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:26.422 08:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:26.422 08:54:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:26.989 [2024-11-20 08:54:57.848871] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:26.989 [2024-11-20 08:54:57.849170] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:26.989 [2024-11-20 08:54:57.849343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:27.555 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:27.555 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:27.555 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:27.555 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:27.555 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:27.555 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:27.555 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.555 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.555 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.555 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:27.556 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.556 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:27.556 "name": "raid_bdev1", 00:21:27.556 "uuid": "1f877212-2e5a-4d8c-9e5a-86b131ca3dce", 00:21:27.556 "strip_size_kb": 0, 00:21:27.556 "state": "online", 00:21:27.556 "raid_level": "raid1", 00:21:27.556 "superblock": true, 00:21:27.556 "num_base_bdevs": 2, 00:21:27.556 
"num_base_bdevs_discovered": 2, 00:21:27.556 "num_base_bdevs_operational": 2, 00:21:27.556 "base_bdevs_list": [ 00:21:27.556 { 00:21:27.556 "name": "spare", 00:21:27.556 "uuid": "cbf2b182-ed31-5daf-b325-57c1d2aa2cbc", 00:21:27.556 "is_configured": true, 00:21:27.556 "data_offset": 256, 00:21:27.556 "data_size": 7936 00:21:27.556 }, 00:21:27.556 { 00:21:27.556 "name": "BaseBdev2", 00:21:27.556 "uuid": "7aa3fe93-b54e-59ba-aeff-b098f0020e61", 00:21:27.556 "is_configured": true, 00:21:27.556 "data_offset": 256, 00:21:27.556 "data_size": 7936 00:21:27.556 } 00:21:27.556 ] 00:21:27.556 }' 00:21:27.556 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:27.556 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:27.556 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:27.556 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:27.556 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:21:27.556 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:27.556 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:27.556 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:27.556 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:27.556 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:27.556 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.556 
08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.556 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.556 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:27.556 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.556 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:27.556 "name": "raid_bdev1", 00:21:27.556 "uuid": "1f877212-2e5a-4d8c-9e5a-86b131ca3dce", 00:21:27.556 "strip_size_kb": 0, 00:21:27.556 "state": "online", 00:21:27.556 "raid_level": "raid1", 00:21:27.556 "superblock": true, 00:21:27.556 "num_base_bdevs": 2, 00:21:27.556 "num_base_bdevs_discovered": 2, 00:21:27.556 "num_base_bdevs_operational": 2, 00:21:27.556 "base_bdevs_list": [ 00:21:27.556 { 00:21:27.556 "name": "spare", 00:21:27.556 "uuid": "cbf2b182-ed31-5daf-b325-57c1d2aa2cbc", 00:21:27.556 "is_configured": true, 00:21:27.556 "data_offset": 256, 00:21:27.556 "data_size": 7936 00:21:27.556 }, 00:21:27.556 { 00:21:27.556 "name": "BaseBdev2", 00:21:27.556 "uuid": "7aa3fe93-b54e-59ba-aeff-b098f0020e61", 00:21:27.556 "is_configured": true, 00:21:27.556 "data_offset": 256, 00:21:27.556 "data_size": 7936 00:21:27.556 } 00:21:27.556 ] 00:21:27.556 }' 00:21:27.556 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:27.816 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:27.816 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:27.816 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:27.816 08:54:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:27.816 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:27.816 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:27.816 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:27.816 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:27.816 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:27.816 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.816 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.816 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:27.816 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.816 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.816 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.816 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.816 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:27.816 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.816 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.816 "name": 
"raid_bdev1", 00:21:27.816 "uuid": "1f877212-2e5a-4d8c-9e5a-86b131ca3dce", 00:21:27.816 "strip_size_kb": 0, 00:21:27.816 "state": "online", 00:21:27.816 "raid_level": "raid1", 00:21:27.816 "superblock": true, 00:21:27.816 "num_base_bdevs": 2, 00:21:27.816 "num_base_bdevs_discovered": 2, 00:21:27.816 "num_base_bdevs_operational": 2, 00:21:27.816 "base_bdevs_list": [ 00:21:27.816 { 00:21:27.816 "name": "spare", 00:21:27.816 "uuid": "cbf2b182-ed31-5daf-b325-57c1d2aa2cbc", 00:21:27.816 "is_configured": true, 00:21:27.816 "data_offset": 256, 00:21:27.816 "data_size": 7936 00:21:27.816 }, 00:21:27.816 { 00:21:27.816 "name": "BaseBdev2", 00:21:27.816 "uuid": "7aa3fe93-b54e-59ba-aeff-b098f0020e61", 00:21:27.816 "is_configured": true, 00:21:27.816 "data_offset": 256, 00:21:27.816 "data_size": 7936 00:21:27.816 } 00:21:27.816 ] 00:21:27.816 }' 00:21:27.816 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.816 08:54:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.391 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:28.391 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.391 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.391 [2024-11-20 08:54:59.069284] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:28.391 [2024-11-20 08:54:59.069327] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:28.391 [2024-11-20 08:54:59.069435] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:28.391 [2024-11-20 08:54:59.069530] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:28.391 [2024-11-20 
08:54:59.069548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:28.391 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.391 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.391 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:21:28.391 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.391 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.391 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.391 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:28.391 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:21:28.391 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:28.391 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:28.391 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.391 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.391 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.391 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:28.391 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.391 08:54:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.391 [2024-11-20 08:54:59.141272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:28.391 [2024-11-20 08:54:59.141343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.391 [2024-11-20 08:54:59.141376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:28.391 [2024-11-20 08:54:59.141391] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.391 [2024-11-20 08:54:59.144019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.391 [2024-11-20 08:54:59.144067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:28.391 [2024-11-20 08:54:59.144176] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:28.391 [2024-11-20 08:54:59.144261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:28.391 [2024-11-20 08:54:59.144409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:28.392 spare 00:21:28.392 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.392 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:28.392 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.392 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.392 [2024-11-20 08:54:59.244563] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:28.392 [2024-11-20 08:54:59.244802] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:28.392 [2024-11-20 08:54:59.244958] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:21:28.392 [2024-11-20 08:54:59.245116] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:28.392 [2024-11-20 08:54:59.245132] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:28.392 [2024-11-20 08:54:59.245331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:28.392 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.392 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:28.392 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:28.392 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:28.392 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:28.392 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:28.392 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:28.392 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.392 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.392 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.392 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.392 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.392 08:54:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.392 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.392 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.392 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.392 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.392 "name": "raid_bdev1", 00:21:28.392 "uuid": "1f877212-2e5a-4d8c-9e5a-86b131ca3dce", 00:21:28.392 "strip_size_kb": 0, 00:21:28.392 "state": "online", 00:21:28.392 "raid_level": "raid1", 00:21:28.392 "superblock": true, 00:21:28.392 "num_base_bdevs": 2, 00:21:28.392 "num_base_bdevs_discovered": 2, 00:21:28.392 "num_base_bdevs_operational": 2, 00:21:28.392 "base_bdevs_list": [ 00:21:28.392 { 00:21:28.392 "name": "spare", 00:21:28.392 "uuid": "cbf2b182-ed31-5daf-b325-57c1d2aa2cbc", 00:21:28.392 "is_configured": true, 00:21:28.392 "data_offset": 256, 00:21:28.392 "data_size": 7936 00:21:28.392 }, 00:21:28.392 { 00:21:28.392 "name": "BaseBdev2", 00:21:28.392 "uuid": "7aa3fe93-b54e-59ba-aeff-b098f0020e61", 00:21:28.392 "is_configured": true, 00:21:28.392 "data_offset": 256, 00:21:28.392 "data_size": 7936 00:21:28.392 } 00:21:28.392 ] 00:21:28.392 }' 00:21:28.392 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.392 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.959 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:28.959 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:28.959 08:54:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:28.959 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:28.959 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:28.959 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.959 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.959 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.959 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:28.959 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.959 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:28.959 "name": "raid_bdev1", 00:21:28.959 "uuid": "1f877212-2e5a-4d8c-9e5a-86b131ca3dce", 00:21:28.959 "strip_size_kb": 0, 00:21:28.959 "state": "online", 00:21:28.959 "raid_level": "raid1", 00:21:28.959 "superblock": true, 00:21:28.959 "num_base_bdevs": 2, 00:21:28.959 "num_base_bdevs_discovered": 2, 00:21:28.959 "num_base_bdevs_operational": 2, 00:21:28.959 "base_bdevs_list": [ 00:21:28.959 { 00:21:28.959 "name": "spare", 00:21:28.959 "uuid": "cbf2b182-ed31-5daf-b325-57c1d2aa2cbc", 00:21:28.959 "is_configured": true, 00:21:28.959 "data_offset": 256, 00:21:28.959 "data_size": 7936 00:21:28.959 }, 00:21:28.959 { 00:21:28.959 "name": "BaseBdev2", 00:21:28.959 "uuid": "7aa3fe93-b54e-59ba-aeff-b098f0020e61", 00:21:28.959 "is_configured": true, 00:21:28.959 "data_offset": 256, 00:21:28.959 "data_size": 7936 00:21:28.959 } 00:21:28.959 ] 00:21:28.959 }' 00:21:28.959 08:54:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:29.219 [2024-11-20 08:54:59.989688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:29.219 08:54:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.219 08:54:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:29.219 08:55:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.219 08:55:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.219 "name": "raid_bdev1", 00:21:29.219 "uuid": "1f877212-2e5a-4d8c-9e5a-86b131ca3dce", 00:21:29.219 "strip_size_kb": 0, 00:21:29.219 "state": "online", 00:21:29.219 
"raid_level": "raid1", 00:21:29.219 "superblock": true, 00:21:29.219 "num_base_bdevs": 2, 00:21:29.219 "num_base_bdevs_discovered": 1, 00:21:29.219 "num_base_bdevs_operational": 1, 00:21:29.219 "base_bdevs_list": [ 00:21:29.219 { 00:21:29.219 "name": null, 00:21:29.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.219 "is_configured": false, 00:21:29.219 "data_offset": 0, 00:21:29.219 "data_size": 7936 00:21:29.219 }, 00:21:29.219 { 00:21:29.219 "name": "BaseBdev2", 00:21:29.219 "uuid": "7aa3fe93-b54e-59ba-aeff-b098f0020e61", 00:21:29.219 "is_configured": true, 00:21:29.219 "data_offset": 256, 00:21:29.219 "data_size": 7936 00:21:29.219 } 00:21:29.219 ] 00:21:29.219 }' 00:21:29.219 08:55:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.219 08:55:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:29.787 08:55:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:29.787 08:55:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.787 08:55:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:29.787 [2024-11-20 08:55:00.505909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:29.787 [2024-11-20 08:55:00.506161] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:29.787 [2024-11-20 08:55:00.506190] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:29.787 [2024-11-20 08:55:00.506249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:29.787 [2024-11-20 08:55:00.522068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:29.787 08:55:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.787 08:55:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:29.787 [2024-11-20 08:55:00.524624] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:30.723 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:30.723 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:30.723 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:30.723 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:30.723 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:30.723 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.723 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.723 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:30.723 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.723 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.723 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:21:30.723 "name": "raid_bdev1", 00:21:30.723 "uuid": "1f877212-2e5a-4d8c-9e5a-86b131ca3dce", 00:21:30.723 "strip_size_kb": 0, 00:21:30.723 "state": "online", 00:21:30.723 "raid_level": "raid1", 00:21:30.723 "superblock": true, 00:21:30.723 "num_base_bdevs": 2, 00:21:30.723 "num_base_bdevs_discovered": 2, 00:21:30.723 "num_base_bdevs_operational": 2, 00:21:30.723 "process": { 00:21:30.723 "type": "rebuild", 00:21:30.723 "target": "spare", 00:21:30.723 "progress": { 00:21:30.723 "blocks": 2560, 00:21:30.723 "percent": 32 00:21:30.723 } 00:21:30.723 }, 00:21:30.723 "base_bdevs_list": [ 00:21:30.723 { 00:21:30.723 "name": "spare", 00:21:30.723 "uuid": "cbf2b182-ed31-5daf-b325-57c1d2aa2cbc", 00:21:30.723 "is_configured": true, 00:21:30.723 "data_offset": 256, 00:21:30.723 "data_size": 7936 00:21:30.723 }, 00:21:30.723 { 00:21:30.723 "name": "BaseBdev2", 00:21:30.723 "uuid": "7aa3fe93-b54e-59ba-aeff-b098f0020e61", 00:21:30.723 "is_configured": true, 00:21:30.723 "data_offset": 256, 00:21:30.723 "data_size": 7936 00:21:30.723 } 00:21:30.723 ] 00:21:30.723 }' 00:21:30.723 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:30.985 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:30.985 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:30.985 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:30.985 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:30.985 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.985 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:30.985 [2024-11-20 08:55:01.690076] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:30.985 [2024-11-20 08:55:01.734019] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:30.985 [2024-11-20 08:55:01.734122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.985 [2024-11-20 08:55:01.734168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:30.985 [2024-11-20 08:55:01.734188] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:30.985 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.985 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:30.986 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:30.986 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:30.986 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:30.986 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:30.986 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:30.986 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.986 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.986 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.986 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.986 08:55:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.986 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.986 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:30.986 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.986 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.986 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.986 "name": "raid_bdev1", 00:21:30.986 "uuid": "1f877212-2e5a-4d8c-9e5a-86b131ca3dce", 00:21:30.986 "strip_size_kb": 0, 00:21:30.986 "state": "online", 00:21:30.986 "raid_level": "raid1", 00:21:30.986 "superblock": true, 00:21:30.986 "num_base_bdevs": 2, 00:21:30.986 "num_base_bdevs_discovered": 1, 00:21:30.986 "num_base_bdevs_operational": 1, 00:21:30.986 "base_bdevs_list": [ 00:21:30.986 { 00:21:30.986 "name": null, 00:21:30.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.986 "is_configured": false, 00:21:30.986 "data_offset": 0, 00:21:30.986 "data_size": 7936 00:21:30.986 }, 00:21:30.986 { 00:21:30.986 "name": "BaseBdev2", 00:21:30.986 "uuid": "7aa3fe93-b54e-59ba-aeff-b098f0020e61", 00:21:30.986 "is_configured": true, 00:21:30.986 "data_offset": 256, 00:21:30.986 "data_size": 7936 00:21:30.986 } 00:21:30.986 ] 00:21:30.986 }' 00:21:30.986 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.986 08:55:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:31.555 08:55:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:31.555 08:55:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.555 08:55:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:31.555 [2024-11-20 08:55:02.290114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:31.555 [2024-11-20 08:55:02.290241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:31.555 [2024-11-20 08:55:02.290275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:31.555 [2024-11-20 08:55:02.290294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:31.555 [2024-11-20 08:55:02.290553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:31.555 [2024-11-20 08:55:02.290585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:31.555 [2024-11-20 08:55:02.290671] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:31.555 [2024-11-20 08:55:02.290713] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:31.555 [2024-11-20 08:55:02.290729] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:31.555 [2024-11-20 08:55:02.290769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:31.555 [2024-11-20 08:55:02.307624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:31.555 spare 00:21:31.555 08:55:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.555 08:55:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:31.555 [2024-11-20 08:55:02.310182] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:32.493 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:32.493 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:32.493 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:32.493 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:32.493 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:32.493 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.493 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.493 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.493 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:32.493 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.493 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:21:32.493 "name": "raid_bdev1", 00:21:32.493 "uuid": "1f877212-2e5a-4d8c-9e5a-86b131ca3dce", 00:21:32.493 "strip_size_kb": 0, 00:21:32.493 "state": "online", 00:21:32.493 "raid_level": "raid1", 00:21:32.493 "superblock": true, 00:21:32.493 "num_base_bdevs": 2, 00:21:32.493 "num_base_bdevs_discovered": 2, 00:21:32.493 "num_base_bdevs_operational": 2, 00:21:32.493 "process": { 00:21:32.493 "type": "rebuild", 00:21:32.493 "target": "spare", 00:21:32.493 "progress": { 00:21:32.493 "blocks": 2560, 00:21:32.493 "percent": 32 00:21:32.493 } 00:21:32.493 }, 00:21:32.493 "base_bdevs_list": [ 00:21:32.493 { 00:21:32.493 "name": "spare", 00:21:32.493 "uuid": "cbf2b182-ed31-5daf-b325-57c1d2aa2cbc", 00:21:32.493 "is_configured": true, 00:21:32.493 "data_offset": 256, 00:21:32.493 "data_size": 7936 00:21:32.493 }, 00:21:32.493 { 00:21:32.493 "name": "BaseBdev2", 00:21:32.493 "uuid": "7aa3fe93-b54e-59ba-aeff-b098f0020e61", 00:21:32.493 "is_configured": true, 00:21:32.493 "data_offset": 256, 00:21:32.493 "data_size": 7936 00:21:32.493 } 00:21:32.493 ] 00:21:32.493 }' 00:21:32.493 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:32.752 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:32.752 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:32.752 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:32.752 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:32.752 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.752 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:32.752 [2024-11-20 
08:55:03.476276] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:32.752 [2024-11-20 08:55:03.519047] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:32.752 [2024-11-20 08:55:03.519143] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:32.752 [2024-11-20 08:55:03.519191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:32.752 [2024-11-20 08:55:03.519205] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:32.752 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.752 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:32.752 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:32.752 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:32.752 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:32.752 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:32.752 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:32.752 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.752 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.752 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.752 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.752 08:55:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.752 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.752 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.752 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:32.752 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.752 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.752 "name": "raid_bdev1", 00:21:32.752 "uuid": "1f877212-2e5a-4d8c-9e5a-86b131ca3dce", 00:21:32.752 "strip_size_kb": 0, 00:21:32.752 "state": "online", 00:21:32.752 "raid_level": "raid1", 00:21:32.752 "superblock": true, 00:21:32.752 "num_base_bdevs": 2, 00:21:32.752 "num_base_bdevs_discovered": 1, 00:21:32.752 "num_base_bdevs_operational": 1, 00:21:32.752 "base_bdevs_list": [ 00:21:32.752 { 00:21:32.752 "name": null, 00:21:32.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.752 "is_configured": false, 00:21:32.752 "data_offset": 0, 00:21:32.752 "data_size": 7936 00:21:32.752 }, 00:21:32.752 { 00:21:32.752 "name": "BaseBdev2", 00:21:32.752 "uuid": "7aa3fe93-b54e-59ba-aeff-b098f0020e61", 00:21:32.752 "is_configured": true, 00:21:32.752 "data_offset": 256, 00:21:32.752 "data_size": 7936 00:21:32.752 } 00:21:32.752 ] 00:21:32.752 }' 00:21:32.752 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.752 08:55:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:33.321 08:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:33.321 08:55:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:33.321 08:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:33.321 08:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:33.321 08:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:33.321 08:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.321 08:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.321 08:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.321 08:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:33.321 08:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.321 08:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:33.321 "name": "raid_bdev1", 00:21:33.321 "uuid": "1f877212-2e5a-4d8c-9e5a-86b131ca3dce", 00:21:33.321 "strip_size_kb": 0, 00:21:33.321 "state": "online", 00:21:33.321 "raid_level": "raid1", 00:21:33.321 "superblock": true, 00:21:33.321 "num_base_bdevs": 2, 00:21:33.321 "num_base_bdevs_discovered": 1, 00:21:33.321 "num_base_bdevs_operational": 1, 00:21:33.321 "base_bdevs_list": [ 00:21:33.321 { 00:21:33.321 "name": null, 00:21:33.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.321 "is_configured": false, 00:21:33.321 "data_offset": 0, 00:21:33.321 "data_size": 7936 00:21:33.321 }, 00:21:33.321 { 00:21:33.321 "name": "BaseBdev2", 00:21:33.321 "uuid": "7aa3fe93-b54e-59ba-aeff-b098f0020e61", 00:21:33.321 "is_configured": true, 00:21:33.321 "data_offset": 256, 
00:21:33.321 "data_size": 7936 00:21:33.321 } 00:21:33.321 ] 00:21:33.321 }' 00:21:33.321 08:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:33.321 08:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:33.321 08:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:33.321 08:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:33.321 08:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:33.321 08:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.321 08:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:33.321 08:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.321 08:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:33.321 08:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.321 08:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:33.321 [2024-11-20 08:55:04.227797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:33.321 [2024-11-20 08:55:04.227873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.321 [2024-11-20 08:55:04.227911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:33.321 [2024-11-20 08:55:04.227926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.321 [2024-11-20 08:55:04.228134] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.321 [2024-11-20 08:55:04.228191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:33.321 [2024-11-20 08:55:04.228274] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:33.321 [2024-11-20 08:55:04.228295] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:33.321 [2024-11-20 08:55:04.228310] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:33.321 [2024-11-20 08:55:04.228323] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:33.321 BaseBdev1 00:21:33.321 08:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.321 08:55:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:34.729 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:34.729 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:34.729 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:34.729 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:34.729 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:34.729 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:34.729 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:34.729 08:55:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:34.729 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:34.729 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:34.729 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.729 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.729 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.729 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:34.729 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.729 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.729 "name": "raid_bdev1", 00:21:34.729 "uuid": "1f877212-2e5a-4d8c-9e5a-86b131ca3dce", 00:21:34.729 "strip_size_kb": 0, 00:21:34.729 "state": "online", 00:21:34.729 "raid_level": "raid1", 00:21:34.729 "superblock": true, 00:21:34.729 "num_base_bdevs": 2, 00:21:34.729 "num_base_bdevs_discovered": 1, 00:21:34.729 "num_base_bdevs_operational": 1, 00:21:34.729 "base_bdevs_list": [ 00:21:34.729 { 00:21:34.729 "name": null, 00:21:34.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.729 "is_configured": false, 00:21:34.729 "data_offset": 0, 00:21:34.729 "data_size": 7936 00:21:34.729 }, 00:21:34.729 { 00:21:34.729 "name": "BaseBdev2", 00:21:34.729 "uuid": "7aa3fe93-b54e-59ba-aeff-b098f0020e61", 00:21:34.729 "is_configured": true, 00:21:34.729 "data_offset": 256, 00:21:34.729 "data_size": 7936 00:21:34.729 } 00:21:34.729 ] 00:21:34.729 }' 00:21:34.729 08:55:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:34.729 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:34.988 "name": "raid_bdev1", 00:21:34.988 "uuid": "1f877212-2e5a-4d8c-9e5a-86b131ca3dce", 00:21:34.988 "strip_size_kb": 0, 00:21:34.988 "state": "online", 00:21:34.988 "raid_level": "raid1", 00:21:34.988 "superblock": true, 00:21:34.988 "num_base_bdevs": 2, 00:21:34.988 "num_base_bdevs_discovered": 1, 00:21:34.988 "num_base_bdevs_operational": 1, 00:21:34.988 "base_bdevs_list": [ 00:21:34.988 { 00:21:34.988 "name": 
null, 00:21:34.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.988 "is_configured": false, 00:21:34.988 "data_offset": 0, 00:21:34.988 "data_size": 7936 00:21:34.988 }, 00:21:34.988 { 00:21:34.988 "name": "BaseBdev2", 00:21:34.988 "uuid": "7aa3fe93-b54e-59ba-aeff-b098f0020e61", 00:21:34.988 "is_configured": true, 00:21:34.988 "data_offset": 256, 00:21:34.988 "data_size": 7936 00:21:34.988 } 00:21:34.988 ] 00:21:34.988 }' 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.988 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:34.988 [2024-11-20 08:55:05.900363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:34.988 [2024-11-20 08:55:05.900564] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:34.988 [2024-11-20 08:55:05.900593] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:35.247 request: 00:21:35.247 { 00:21:35.247 "base_bdev": "BaseBdev1", 00:21:35.247 "raid_bdev": "raid_bdev1", 00:21:35.247 "method": "bdev_raid_add_base_bdev", 00:21:35.247 "req_id": 1 00:21:35.247 } 00:21:35.247 Got JSON-RPC error response 00:21:35.247 response: 00:21:35.247 { 00:21:35.247 "code": -22, 00:21:35.247 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:35.247 } 00:21:35.247 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:35.247 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:21:35.247 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:35.247 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:35.247 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:35.247 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:36.181 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:21:36.181 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:36.181 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:36.181 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:36.181 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:36.181 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:36.181 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:36.181 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:36.181 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:36.181 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:36.181 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.181 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.181 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:36.181 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.181 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.181 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:36.181 "name": "raid_bdev1", 00:21:36.181 "uuid": "1f877212-2e5a-4d8c-9e5a-86b131ca3dce", 00:21:36.181 "strip_size_kb": 0, 
00:21:36.181 "state": "online", 00:21:36.181 "raid_level": "raid1", 00:21:36.181 "superblock": true, 00:21:36.181 "num_base_bdevs": 2, 00:21:36.181 "num_base_bdevs_discovered": 1, 00:21:36.181 "num_base_bdevs_operational": 1, 00:21:36.181 "base_bdevs_list": [ 00:21:36.181 { 00:21:36.181 "name": null, 00:21:36.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.181 "is_configured": false, 00:21:36.181 "data_offset": 0, 00:21:36.181 "data_size": 7936 00:21:36.181 }, 00:21:36.181 { 00:21:36.181 "name": "BaseBdev2", 00:21:36.181 "uuid": "7aa3fe93-b54e-59ba-aeff-b098f0020e61", 00:21:36.181 "is_configured": true, 00:21:36.181 "data_offset": 256, 00:21:36.181 "data_size": 7936 00:21:36.181 } 00:21:36.181 ] 00:21:36.181 }' 00:21:36.181 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:36.181 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:36.749 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:36.749 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:36.749 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:36.749 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:36.749 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:36.749 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.749 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.749 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.749 
08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:36.749 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.749 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:36.749 "name": "raid_bdev1", 00:21:36.749 "uuid": "1f877212-2e5a-4d8c-9e5a-86b131ca3dce", 00:21:36.749 "strip_size_kb": 0, 00:21:36.749 "state": "online", 00:21:36.749 "raid_level": "raid1", 00:21:36.749 "superblock": true, 00:21:36.749 "num_base_bdevs": 2, 00:21:36.749 "num_base_bdevs_discovered": 1, 00:21:36.749 "num_base_bdevs_operational": 1, 00:21:36.749 "base_bdevs_list": [ 00:21:36.749 { 00:21:36.749 "name": null, 00:21:36.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.749 "is_configured": false, 00:21:36.749 "data_offset": 0, 00:21:36.749 "data_size": 7936 00:21:36.749 }, 00:21:36.749 { 00:21:36.749 "name": "BaseBdev2", 00:21:36.749 "uuid": "7aa3fe93-b54e-59ba-aeff-b098f0020e61", 00:21:36.749 "is_configured": true, 00:21:36.749 "data_offset": 256, 00:21:36.749 "data_size": 7936 00:21:36.749 } 00:21:36.749 ] 00:21:36.749 }' 00:21:36.749 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:36.749 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:36.749 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:36.749 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:36.749 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89485 00:21:36.749 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89485 ']' 00:21:36.749 08:55:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89485 00:21:36.749 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:21:36.749 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:36.749 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89485 00:21:36.749 killing process with pid 89485 00:21:36.749 Received shutdown signal, test time was about 60.000000 seconds 00:21:36.749 00:21:36.749 Latency(us) 00:21:36.749 [2024-11-20T08:55:07.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.749 [2024-11-20T08:55:07.665Z] =================================================================================================================== 00:21:36.749 [2024-11-20T08:55:07.665Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:36.749 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:36.749 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:36.749 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89485' 00:21:36.749 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89485 00:21:36.749 [2024-11-20 08:55:07.579183] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:36.749 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89485 00:21:36.749 [2024-11-20 08:55:07.579357] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:36.749 [2024-11-20 08:55:07.579420] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:21:36.749 [2024-11-20 08:55:07.579441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:37.008 [2024-11-20 08:55:07.849413] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:37.945 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:21:37.945 00:21:37.945 real 0m18.508s 00:21:37.945 user 0m25.240s 00:21:37.945 sys 0m1.409s 00:21:38.205 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.205 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.205 ************************************ 00:21:38.205 END TEST raid_rebuild_test_sb_md_interleaved 00:21:38.205 ************************************ 00:21:38.205 08:55:08 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:21:38.205 08:55:08 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:21:38.205 08:55:08 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89485 ']' 00:21:38.205 08:55:08 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89485 00:21:38.205 08:55:08 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:21:38.205 00:21:38.205 real 12m59.687s 00:21:38.205 user 18m22.763s 00:21:38.205 sys 1m44.722s 00:21:38.205 08:55:08 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.205 08:55:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:38.205 ************************************ 00:21:38.205 END TEST bdev_raid 00:21:38.205 ************************************ 00:21:38.205 08:55:08 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:38.205 08:55:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:38.205 08:55:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:38.205 08:55:08 -- common/autotest_common.sh@10 -- # set +x 00:21:38.205 
************************************ 00:21:38.205 START TEST spdkcli_raid 00:21:38.205 ************************************ 00:21:38.205 08:55:08 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:38.205 * Looking for test storage... 00:21:38.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:38.205 08:55:09 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:38.205 08:55:09 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:21:38.205 08:55:09 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:38.468 08:55:09 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:38.468 08:55:09 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:21:38.468 08:55:09 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:38.468 08:55:09 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:38.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.468 --rc genhtml_branch_coverage=1 00:21:38.468 --rc genhtml_function_coverage=1 00:21:38.468 --rc genhtml_legend=1 00:21:38.468 --rc geninfo_all_blocks=1 00:21:38.468 --rc geninfo_unexecuted_blocks=1 00:21:38.468 00:21:38.468 ' 00:21:38.468 08:55:09 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:38.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.468 --rc genhtml_branch_coverage=1 00:21:38.468 --rc genhtml_function_coverage=1 00:21:38.468 --rc genhtml_legend=1 00:21:38.468 --rc geninfo_all_blocks=1 00:21:38.468 --rc geninfo_unexecuted_blocks=1 00:21:38.468 00:21:38.468 ' 00:21:38.468 
08:55:09 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:38.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.468 --rc genhtml_branch_coverage=1 00:21:38.468 --rc genhtml_function_coverage=1 00:21:38.468 --rc genhtml_legend=1 00:21:38.468 --rc geninfo_all_blocks=1 00:21:38.468 --rc geninfo_unexecuted_blocks=1 00:21:38.468 00:21:38.468 ' 00:21:38.468 08:55:09 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:38.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.468 --rc genhtml_branch_coverage=1 00:21:38.468 --rc genhtml_function_coverage=1 00:21:38.468 --rc genhtml_legend=1 00:21:38.468 --rc geninfo_all_blocks=1 00:21:38.468 --rc geninfo_unexecuted_blocks=1 00:21:38.468 00:21:38.468 ' 00:21:38.468 08:55:09 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:38.468 08:55:09 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:38.468 08:55:09 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:38.468 08:55:09 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:21:38.468 08:55:09 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:21:38.468 08:55:09 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:21:38.468 08:55:09 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:21:38.468 08:55:09 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:21:38.468 08:55:09 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:21:38.468 08:55:09 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:21:38.468 08:55:09 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:21:38.468 08:55:09 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:21:38.468 08:55:09 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:21:38.468 08:55:09 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:21:38.468 08:55:09 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:21:38.468 08:55:09 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:21:38.468 08:55:09 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:21:38.468 08:55:09 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:21:38.468 08:55:09 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:21:38.468 08:55:09 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:21:38.468 08:55:09 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:21:38.469 08:55:09 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:21:38.469 08:55:09 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:21:38.469 08:55:09 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:21:38.469 08:55:09 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:21:38.469 08:55:09 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:38.469 08:55:09 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:38.469 08:55:09 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:38.469 08:55:09 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:38.469 08:55:09 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:38.469 08:55:09 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:38.469 08:55:09 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:21:38.469 08:55:09 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:21:38.469 08:55:09 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:38.469 08:55:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:38.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.469 08:55:09 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:21:38.469 08:55:09 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90173 00:21:38.469 08:55:09 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90173 00:21:38.469 08:55:09 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:21:38.469 08:55:09 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90173 ']' 00:21:38.469 08:55:09 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.469 08:55:09 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.469 08:55:09 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.469 08:55:09 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.469 08:55:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:38.469 [2024-11-20 08:55:09.301222] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:21:38.469 [2024-11-20 08:55:09.301378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90173 ] 00:21:38.729 [2024-11-20 08:55:09.478638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:38.729 [2024-11-20 08:55:09.610456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.729 [2024-11-20 08:55:09.610467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.665 08:55:10 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:39.665 08:55:10 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:21:39.665 08:55:10 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:21:39.665 08:55:10 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:39.665 08:55:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:39.665 08:55:10 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:21:39.665 08:55:10 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:39.665 08:55:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:39.665 08:55:10 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:21:39.665 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:21:39.665 ' 00:21:41.568 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:21:41.568 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:21:41.568 08:55:12 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:21:41.568 08:55:12 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:41.568 08:55:12 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:21:41.568 08:55:12 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:21:41.568 08:55:12 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:41.568 08:55:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:41.568 08:55:12 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:21:41.568 ' 00:21:42.504 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:21:42.763 08:55:13 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:21:42.763 08:55:13 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:42.763 08:55:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:42.763 08:55:13 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:21:42.763 08:55:13 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:42.763 08:55:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:42.763 08:55:13 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:21:42.763 08:55:13 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:21:43.331 08:55:14 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:21:43.331 08:55:14 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:21:43.331 08:55:14 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:21:43.331 08:55:14 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:43.331 08:55:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:43.331 08:55:14 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:21:43.331 08:55:14 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:43.331 08:55:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:43.331 08:55:14 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:21:43.331 ' 00:21:44.370 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:21:44.684 08:55:15 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:21:44.684 08:55:15 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:44.684 08:55:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:44.684 08:55:15 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:21:44.684 08:55:15 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:44.684 08:55:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:44.684 08:55:15 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:21:44.684 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:21:44.684 ' 00:21:46.061 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:21:46.061 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:21:46.061 08:55:16 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:21:46.061 08:55:16 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:46.061 08:55:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:46.061 08:55:16 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90173 00:21:46.061 08:55:16 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90173 ']' 00:21:46.061 08:55:16 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90173 00:21:46.061 08:55:16 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:21:46.061 08:55:16 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.061 08:55:16 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90173 00:21:46.061 08:55:16 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:46.061 killing process with pid 90173 00:21:46.061 08:55:16 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:46.062 08:55:16 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90173' 00:21:46.062 08:55:16 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90173 00:21:46.062 08:55:16 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90173 00:21:48.661 08:55:19 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:21:48.661 08:55:19 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90173 ']' 00:21:48.661 08:55:19 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90173 00:21:48.661 Process with pid 90173 is not found 00:21:48.661 08:55:19 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90173 ']' 00:21:48.661 08:55:19 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90173 00:21:48.661 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90173) - No such process 00:21:48.661 08:55:19 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90173 is not found' 00:21:48.661 08:55:19 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:21:48.661 08:55:19 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:21:48.661 08:55:19 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:21:48.661 08:55:19 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:21:48.661 ************************************ 00:21:48.661 END TEST spdkcli_raid 
00:21:48.661 ************************************ 00:21:48.661 00:21:48.661 real 0m10.164s 00:21:48.661 user 0m21.200s 00:21:48.661 sys 0m1.168s 00:21:48.661 08:55:19 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:48.661 08:55:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:48.661 08:55:19 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:21:48.661 08:55:19 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:48.661 08:55:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:48.661 08:55:19 -- common/autotest_common.sh@10 -- # set +x 00:21:48.661 ************************************ 00:21:48.661 START TEST blockdev_raid5f 00:21:48.661 ************************************ 00:21:48.661 08:55:19 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:21:48.661 * Looking for test storage... 00:21:48.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:21:48.661 08:55:19 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:48.661 08:55:19 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:21:48.661 08:55:19 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:48.661 08:55:19 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:48.661 08:55:19 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:21:48.661 08:55:19 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:48.661 08:55:19 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:48.661 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.661 --rc genhtml_branch_coverage=1 00:21:48.661 --rc genhtml_function_coverage=1 00:21:48.661 --rc genhtml_legend=1 00:21:48.661 --rc geninfo_all_blocks=1 00:21:48.661 --rc geninfo_unexecuted_blocks=1 00:21:48.661 00:21:48.661 ' 00:21:48.661 08:55:19 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:48.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.661 --rc genhtml_branch_coverage=1 00:21:48.661 --rc genhtml_function_coverage=1 00:21:48.661 --rc genhtml_legend=1 00:21:48.661 --rc geninfo_all_blocks=1 00:21:48.661 --rc geninfo_unexecuted_blocks=1 00:21:48.661 00:21:48.661 ' 00:21:48.661 08:55:19 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:48.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.661 --rc genhtml_branch_coverage=1 00:21:48.661 --rc genhtml_function_coverage=1 00:21:48.661 --rc genhtml_legend=1 00:21:48.661 --rc geninfo_all_blocks=1 00:21:48.661 --rc geninfo_unexecuted_blocks=1 00:21:48.661 00:21:48.661 ' 00:21:48.661 08:55:19 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:48.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.661 --rc genhtml_branch_coverage=1 00:21:48.661 --rc genhtml_function_coverage=1 00:21:48.661 --rc genhtml_legend=1 00:21:48.661 --rc geninfo_all_blocks=1 00:21:48.661 --rc geninfo_unexecuted_blocks=1 00:21:48.661 00:21:48.661 ' 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:21:48.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90448 00:21:48.661 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:21:48.662 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90448 00:21:48.662 08:55:19 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:21:48.662 08:55:19 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90448 ']' 00:21:48.662 08:55:19 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.662 08:55:19 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.662 08:55:19 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.662 08:55:19 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:48.662 08:55:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:48.662 [2024-11-20 08:55:19.507909] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:21:48.662 [2024-11-20 08:55:19.508314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90448 ] 00:21:48.920 [2024-11-20 08:55:19.683124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.920 [2024-11-20 08:55:19.811804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.856 08:55:20 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.856 08:55:20 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:21:49.856 08:55:20 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:21:49.856 08:55:20 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:21:49.856 08:55:20 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:21:49.856 08:55:20 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.856 08:55:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:49.856 Malloc0 00:21:49.856 Malloc1 00:21:50.115 Malloc2 00:21:50.115 08:55:20 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.115 08:55:20 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:21:50.115 08:55:20 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.115 08:55:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:50.115 08:55:20 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.115 08:55:20 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:21:50.115 08:55:20 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:21:50.115 08:55:20 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.115 08:55:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:50.115 08:55:20 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.115 08:55:20 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:21:50.115 08:55:20 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.115 08:55:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:50.115 08:55:20 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.115 08:55:20 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:21:50.115 08:55:20 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.115 08:55:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:50.115 08:55:20 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.115 08:55:20 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:21:50.115 08:55:20 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:21:50.115 08:55:20 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.115 08:55:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:50.115 08:55:20 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:21:50.115 08:55:20 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.115 08:55:20 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:21:50.115 08:55:20 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:21:50.116 08:55:20 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "114d57b0-3425-4246-9a98-0f7b58dc6702"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "114d57b0-3425-4246-9a98-0f7b58dc6702",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "114d57b0-3425-4246-9a98-0f7b58dc6702",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "9af9d836-5a4b-4a4d-8999-739587bc4ed6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "fa778f44-fb9f-41ad-a85c-1310862df349",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "f58b08c8-b16a-4d23-b0f2-1110371bfb82",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:21:50.116 08:55:20 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:21:50.116 08:55:20 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:21:50.116 08:55:20 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:21:50.116 08:55:20 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90448 00:21:50.116 08:55:20 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90448 ']' 00:21:50.116 08:55:20 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90448 00:21:50.116 08:55:20 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:21:50.116 08:55:20 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.116 
08:55:20 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90448 00:21:50.116 killing process with pid 90448 00:21:50.116 08:55:20 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:50.116 08:55:20 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:50.116 08:55:20 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90448' 00:21:50.116 08:55:21 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90448 00:21:50.116 08:55:21 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90448 00:21:52.660 08:55:23 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:52.660 08:55:23 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:21:52.660 08:55:23 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:52.660 08:55:23 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:52.660 08:55:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:52.660 ************************************ 00:21:52.660 START TEST bdev_hello_world 00:21:52.660 ************************************ 00:21:52.660 08:55:23 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:21:52.918 [2024-11-20 08:55:23.611850] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:21:52.918 [2024-11-20 08:55:23.612227] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90514 ] 00:21:52.918 [2024-11-20 08:55:23.788233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.177 [2024-11-20 08:55:23.918442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.744 [2024-11-20 08:55:24.453355] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:21:53.744 [2024-11-20 08:55:24.453610] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:21:53.744 [2024-11-20 08:55:24.453650] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:21:53.744 [2024-11-20 08:55:24.454264] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:21:53.744 [2024-11-20 08:55:24.454473] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:21:53.744 [2024-11-20 08:55:24.454529] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:21:53.744 [2024-11-20 08:55:24.454613] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:21:53.744 00:21:53.744 [2024-11-20 08:55:24.454644] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:21:55.121 00:21:55.121 real 0m2.203s 00:21:55.121 user 0m1.757s 00:21:55.121 sys 0m0.313s 00:21:55.121 08:55:25 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:55.121 ************************************ 00:21:55.121 END TEST bdev_hello_world 00:21:55.121 ************************************ 00:21:55.121 08:55:25 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:21:55.121 08:55:25 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:21:55.121 08:55:25 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:55.121 08:55:25 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:55.121 08:55:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:55.121 ************************************ 00:21:55.121 START TEST bdev_bounds 00:21:55.121 ************************************ 00:21:55.121 08:55:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:21:55.121 Process bdevio pid: 90555 00:21:55.121 08:55:25 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90555 00:21:55.121 08:55:25 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:21:55.121 08:55:25 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90555' 00:21:55.121 08:55:25 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:55.121 08:55:25 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90555 00:21:55.121 08:55:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90555 ']' 00:21:55.121 08:55:25 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.121 08:55:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:55.121 08:55:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.121 08:55:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:55.121 08:55:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:55.121 [2024-11-20 08:55:25.864097] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:21:55.121 [2024-11-20 08:55:25.864861] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90555 ] 00:21:55.380 [2024-11-20 08:55:26.040709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:55.380 [2024-11-20 08:55:26.173821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.380 [2024-11-20 08:55:26.173963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.380 [2024-11-20 08:55:26.173975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:55.948 08:55:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:55.948 08:55:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:21:55.948 08:55:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:21:56.206 I/O targets: 00:21:56.206 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:21:56.207 00:21:56.207 
00:21:56.207 CUnit - A unit testing framework for C - Version 2.1-3 00:21:56.207 http://cunit.sourceforge.net/ 00:21:56.207 00:21:56.207 00:21:56.207 Suite: bdevio tests on: raid5f 00:21:56.207 Test: blockdev write read block ...passed 00:21:56.207 Test: blockdev write zeroes read block ...passed 00:21:56.207 Test: blockdev write zeroes read no split ...passed 00:21:56.207 Test: blockdev write zeroes read split ...passed 00:21:56.464 Test: blockdev write zeroes read split partial ...passed 00:21:56.464 Test: blockdev reset ...passed 00:21:56.464 Test: blockdev write read 8 blocks ...passed 00:21:56.464 Test: blockdev write read size > 128k ...passed 00:21:56.464 Test: blockdev write read invalid size ...passed 00:21:56.464 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:56.464 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:56.464 Test: blockdev write read max offset ...passed 00:21:56.464 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:56.464 Test: blockdev writev readv 8 blocks ...passed 00:21:56.464 Test: blockdev writev readv 30 x 1block ...passed 00:21:56.464 Test: blockdev writev readv block ...passed 00:21:56.464 Test: blockdev writev readv size > 128k ...passed 00:21:56.464 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:56.464 Test: blockdev comparev and writev ...passed 00:21:56.464 Test: blockdev nvme passthru rw ...passed 00:21:56.464 Test: blockdev nvme passthru vendor specific ...passed 00:21:56.464 Test: blockdev nvme admin passthru ...passed 00:21:56.464 Test: blockdev copy ...passed 00:21:56.464 00:21:56.464 Run Summary: Type Total Ran Passed Failed Inactive 00:21:56.464 suites 1 1 n/a 0 0 00:21:56.464 tests 23 23 23 0 0 00:21:56.464 asserts 130 130 130 0 n/a 00:21:56.464 00:21:56.464 Elapsed time = 0.571 seconds 00:21:56.464 0 00:21:56.464 08:55:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90555 00:21:56.464 
08:55:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90555 ']' 00:21:56.464 08:55:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90555 00:21:56.464 08:55:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:21:56.464 08:55:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.464 08:55:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90555 00:21:56.464 killing process with pid 90555 00:21:56.464 08:55:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:56.464 08:55:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:56.464 08:55:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90555' 00:21:56.464 08:55:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90555 00:21:56.464 08:55:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90555 00:21:57.860 08:55:28 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:21:57.860 00:21:57.860 real 0m2.773s 00:21:57.860 user 0m6.863s 00:21:57.860 sys 0m0.441s 00:21:57.860 08:55:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:57.860 08:55:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:57.860 ************************************ 00:21:57.860 END TEST bdev_bounds 00:21:57.860 ************************************ 00:21:57.860 08:55:28 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:21:57.860 08:55:28 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:57.860 08:55:28 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:57.860 
08:55:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:57.860 ************************************ 00:21:57.860 START TEST bdev_nbd 00:21:57.860 ************************************ 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90617 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90617 /var/tmp/spdk-nbd.sock 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90617 ']' 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:21:57.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:57.860 08:55:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:57.860 [2024-11-20 08:55:28.691962] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:21:57.860 [2024-11-20 08:55:28.693278] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.119 [2024-11-20 08:55:28.879923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.119 [2024-11-20 08:55:29.011638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:59.056 1+0 records in 00:21:59.056 1+0 records out 00:21:59.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294194 s, 13.9 MB/s 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:21:59.056 08:55:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:59.315 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:21:59.315 { 00:21:59.315 "nbd_device": "/dev/nbd0", 00:21:59.315 "bdev_name": "raid5f" 00:21:59.315 } 00:21:59.315 ]' 00:21:59.315 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:21:59.315 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:21:59.315 { 00:21:59.315 "nbd_device": "/dev/nbd0", 00:21:59.315 "bdev_name": "raid5f" 00:21:59.315 } 00:21:59.315 ]' 00:21:59.315 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:21:59.574 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:59.574 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:59.574 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:59.574 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:59.574 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:59.574 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:59.574 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:59.834 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:21:59.834 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:59.834 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:59.834 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:59.834 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:59.834 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:59.834 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:59.834 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:59.834 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:59.834 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:59.834 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:00.092 08:55:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:22:00.351 /dev/nbd0 00:22:00.351 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:00.351 08:55:31 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:00.351 08:55:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:00.351 08:55:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:00.351 08:55:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:00.351 08:55:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:00.351 08:55:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:00.351 08:55:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:00.351 08:55:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:00.351 08:55:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:00.351 08:55:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:00.351 1+0 records in 00:22:00.351 1+0 records out 00:22:00.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398919 s, 10.3 MB/s 00:22:00.351 08:55:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:00.351 08:55:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:00.352 08:55:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:00.352 08:55:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:00.352 08:55:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:00.352 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:00.352 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:00.611 08:55:31 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:00.611 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:00.611 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:00.869 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:22:00.869 { 00:22:00.869 "nbd_device": "/dev/nbd0", 00:22:00.869 "bdev_name": "raid5f" 00:22:00.869 } 00:22:00.869 ]' 00:22:00.869 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:22:00.869 { 00:22:00.869 "nbd_device": "/dev/nbd0", 00:22:00.869 "bdev_name": "raid5f" 00:22:00.869 } 00:22:00.869 ]' 00:22:00.869 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:00.869 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:22:00.869 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:22:00.869 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:00.869 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:22:00.869 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:22:00.869 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:22:00.869 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:22:00.869 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:22:00.869 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:22:00.869 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:00.869 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:22:00.869 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:00.869 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:22:00.869 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:22:00.869 256+0 records in 00:22:00.869 256+0 records out 00:22:00.869 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105942 s, 99.0 MB/s 00:22:00.869 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:00.869 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:22:00.869 256+0 records in 00:22:00.869 256+0 records out 00:22:00.869 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0406316 s, 25.8 MB/s 00:22:00.869 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:22:00.869 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:22:00.869 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:00.869 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:22:00.869 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:00.870 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:22:00.870 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:22:00.870 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:00.870 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:22:00.870 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:00.870 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:00.870 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:00.870 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:00.870 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:00.870 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:00.870 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:00.870 08:55:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:01.128 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:01.128 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:01.128 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:01.128 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:01.386 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:01.386 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:01.387 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:01.387 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:01.387 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:01.387 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:01.387 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:22:01.644 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:01.644 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:01.644 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:01.644 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:01.644 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:01.644 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:01.644 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:01.645 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:01.645 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:01.645 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:22:01.645 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:22:01.645 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:22:01.645 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:01.645 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:01.645 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:22:01.645 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:22:01.921 malloc_lvol_verify 00:22:01.921 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:22:02.185 a6c302c4-69ce-48b2-99b8-cd7eedb38428 00:22:02.185 08:55:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:22:02.443 328fce67-940b-415f-83f4-96ee4b406514 00:22:02.444 08:55:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:22:02.702 /dev/nbd0 00:22:02.702 08:55:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:22:02.702 08:55:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:22:02.702 08:55:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:22:02.702 08:55:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:22:02.702 08:55:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:22:02.702 mke2fs 1.47.0 (5-Feb-2023) 00:22:02.702 Discarding device blocks: 0/4096 done 00:22:02.702 Creating filesystem with 4096 1k blocks and 1024 inodes 00:22:02.702 00:22:02.702 Allocating group tables: 0/1 done 00:22:02.702 Writing inode tables: 0/1 done 00:22:02.702 Creating journal (1024 blocks): done 00:22:02.702 Writing superblocks and filesystem accounting information: 0/1 done 00:22:02.702 00:22:02.702 08:55:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:02.702 08:55:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:02.702 08:55:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:02.702 08:55:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:02.702 08:55:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:02.702 08:55:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:02.702 08:55:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:02.960 08:55:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:02.960 08:55:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:02.960 08:55:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:02.960 08:55:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:02.960 08:55:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:02.960 08:55:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:02.960 08:55:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:02.960 08:55:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:02.960 08:55:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90617 00:22:02.960 08:55:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90617 ']' 00:22:02.960 08:55:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90617 00:22:02.960 08:55:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:22:02.960 08:55:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:02.960 08:55:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90617 00:22:02.960 08:55:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:02.960 killing process with pid 90617 00:22:02.960 08:55:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:02.960 08:55:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90617' 00:22:02.960 08:55:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90617 00:22:02.960 08:55:33 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90617 00:22:04.336 08:55:35 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:22:04.336 00:22:04.336 real 0m6.524s 00:22:04.336 user 0m9.415s 00:22:04.336 sys 0m1.383s 00:22:04.336 08:55:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:04.336 08:55:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:04.336 ************************************ 00:22:04.336 END TEST bdev_nbd 00:22:04.336 ************************************ 00:22:04.336 08:55:35 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:22:04.336 08:55:35 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:22:04.336 08:55:35 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:22:04.336 08:55:35 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:22:04.336 08:55:35 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:04.336 08:55:35 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:04.336 08:55:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:04.336 ************************************ 00:22:04.336 START TEST bdev_fio 00:22:04.336 ************************************ 00:22:04.336 08:55:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:22:04.336 08:55:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:22:04.337 08:55:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:22:04.337 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:22:04.337 08:55:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:22:04.337 08:55:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:22:04.337 08:55:35 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:22:04.337 08:55:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:22:04.337 08:55:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:22:04.337 08:55:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:04.337 08:55:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:22:04.337 08:55:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:22:04.337 08:55:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:22:04.337 08:55:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:22:04.337 08:55:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:04.337 08:55:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:22:04.337 08:55:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:22:04.337 08:55:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:04.337 08:55:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:22:04.337 08:55:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:22:04.337 08:55:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:22:04.337 08:55:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:22:04.337 08:55:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:04.596 ************************************ 00:22:04.596 START TEST bdev_fio_rw_verify 00:22:04.596 ************************************ 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:04.596 08:55:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:04.855 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:04.855 fio-3.35 00:22:04.855 Starting 1 thread 00:22:17.113 00:22:17.113 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90823: Wed Nov 20 08:55:46 2024 00:22:17.113 read: IOPS=8530, BW=33.3MiB/s (34.9MB/s)(333MiB/10001msec) 00:22:17.113 slat (usec): min=22, max=105, avg=28.86, stdev= 5.66 00:22:17.113 clat (usec): min=14, max=590, avg=186.39, stdev=69.32 00:22:17.113 lat (usec): min=42, max=643, avg=215.24, stdev=70.04 00:22:17.113 clat percentiles (usec): 00:22:17.113 | 50.000th=[ 188], 99.000th=[ 326], 99.900th=[ 388], 99.990th=[ 445], 00:22:17.113 | 99.999th=[ 594] 00:22:17.113 write: IOPS=8932, BW=34.9MiB/s (36.6MB/s)(344MiB/9869msec); 0 zone resets 00:22:17.113 slat (usec): min=11, max=210, avg=23.45, stdev= 6.16 00:22:17.113 clat (usec): min=82, max=1235, avg=430.03, stdev=57.33 00:22:17.113 lat (usec): min=104, max=1445, avg=453.48, stdev=58.78 00:22:17.113 clat percentiles (usec): 00:22:17.113 | 50.000th=[ 437], 99.000th=[ 570], 99.900th=[ 693], 99.990th=[ 1004], 00:22:17.113 | 99.999th=[ 1237] 00:22:17.113 bw ( KiB/s): min=33584, max=38104, per=98.65%, avg=35248.00, stdev=1604.78, samples=19 00:22:17.113 iops : min= 8396, max= 9526, avg=8812.00, stdev=401.19, samples=19 00:22:17.113 lat (usec) : 20=0.01%, 50=0.01%, 100=5.87%, 
250=32.43%, 500=57.70% 00:22:17.113 lat (usec) : 750=3.97%, 1000=0.01% 00:22:17.113 lat (msec) : 2=0.01% 00:22:17.113 cpu : usr=98.64%, sys=0.41%, ctx=20, majf=0, minf=7390 00:22:17.113 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:17.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.113 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.113 issued rwts: total=85310,88157,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.113 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:17.113 00:22:17.113 Run status group 0 (all jobs): 00:22:17.113 READ: bw=33.3MiB/s (34.9MB/s), 33.3MiB/s-33.3MiB/s (34.9MB/s-34.9MB/s), io=333MiB (349MB), run=10001-10001msec 00:22:17.113 WRITE: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=344MiB (361MB), run=9869-9869msec 00:22:17.113 ----------------------------------------------------- 00:22:17.113 Suppressions used: 00:22:17.113 count bytes template 00:22:17.113 1 7 /usr/src/fio/parse.c 00:22:17.113 214 20544 /usr/src/fio/iolog.c 00:22:17.113 1 8 libtcmalloc_minimal.so 00:22:17.113 1 904 libcrypto.so 00:22:17.113 ----------------------------------------------------- 00:22:17.113 00:22:17.113 00:22:17.113 real 0m12.734s 00:22:17.113 user 0m13.024s 00:22:17.113 sys 0m0.808s 00:22:17.113 08:55:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:17.113 08:55:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:22:17.113 ************************************ 00:22:17.113 END TEST bdev_fio_rw_verify 00:22:17.113 ************************************ 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "114d57b0-3425-4246-9a98-0f7b58dc6702"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "114d57b0-3425-4246-9a98-0f7b58dc6702",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "114d57b0-3425-4246-9a98-0f7b58dc6702",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "9af9d836-5a4b-4a4d-8999-739587bc4ed6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "fa778f44-fb9f-41ad-a85c-1310862df349",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "f58b08c8-b16a-4d23-b0f2-1110371bfb82",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:17.373 /home/vagrant/spdk_repo/spdk 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:22:17.373 00:22:17.373 real 0m12.956s 00:22:17.373 user 0m13.129s 00:22:17.373 sys 0m0.903s 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:17.373 08:55:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:17.373 ************************************ 00:22:17.373 END TEST bdev_fio 00:22:17.373 ************************************ 00:22:17.373 08:55:48 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:17.373 08:55:48 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:17.373 08:55:48 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:22:17.373 08:55:48 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:17.373 08:55:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:17.373 ************************************ 00:22:17.373 START TEST bdev_verify 00:22:17.373 ************************************ 00:22:17.373 08:55:48 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:17.373 [2024-11-20 08:55:48.267340] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 
00:22:17.373 [2024-11-20 08:55:48.267498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90988 ] 00:22:17.632 [2024-11-20 08:55:48.440672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:17.892 [2024-11-20 08:55:48.569831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.892 [2024-11-20 08:55:48.569832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.459 Running I/O for 5 seconds... 00:22:20.333 11740.00 IOPS, 45.86 MiB/s [2024-11-20T08:55:52.186Z] 12480.50 IOPS, 48.75 MiB/s [2024-11-20T08:55:53.564Z] 13028.67 IOPS, 50.89 MiB/s [2024-11-20T08:55:54.132Z] 12822.50 IOPS, 50.09 MiB/s [2024-11-20T08:55:54.391Z] 12884.20 IOPS, 50.33 MiB/s 00:22:23.475 Latency(us) 00:22:23.475 [2024-11-20T08:55:54.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.475 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:23.476 Verification LBA range: start 0x0 length 0x2000 00:22:23.476 raid5f : 5.01 6427.69 25.11 0.00 0.00 29893.44 258.79 26929.34 00:22:23.476 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:23.476 Verification LBA range: start 0x2000 length 0x2000 00:22:23.476 raid5f : 5.02 6439.71 25.16 0.00 0.00 29881.41 309.06 25737.77 00:22:23.476 [2024-11-20T08:55:54.392Z] =================================================================================================================== 00:22:23.476 [2024-11-20T08:55:54.392Z] Total : 12867.41 50.26 0.00 0.00 29887.41 258.79 26929.34 00:22:24.854 00:22:24.854 real 0m7.258s 00:22:24.854 user 0m13.345s 00:22:24.854 sys 0m0.293s 00:22:24.854 08:55:55 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:24.854 
************************************ 00:22:24.854 END TEST bdev_verify 00:22:24.854 ************************************ 00:22:24.854 08:55:55 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:22:24.854 08:55:55 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:24.854 08:55:55 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:22:24.854 08:55:55 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:24.854 08:55:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:24.854 ************************************ 00:22:24.854 START TEST bdev_verify_big_io 00:22:24.854 ************************************ 00:22:24.854 08:55:55 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:24.854 [2024-11-20 08:55:55.607324] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:22:24.854 [2024-11-20 08:55:55.607530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91081 ] 00:22:25.113 [2024-11-20 08:55:55.793179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:25.113 [2024-11-20 08:55:55.922501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.113 [2024-11-20 08:55:55.922514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.727 Running I/O for 5 seconds... 
00:22:27.601 506.00 IOPS, 31.62 MiB/s [2024-11-20T08:55:59.952Z] 634.00 IOPS, 39.62 MiB/s [2024-11-20T08:56:00.889Z] 676.67 IOPS, 42.29 MiB/s [2024-11-20T08:56:01.827Z] 697.50 IOPS, 43.59 MiB/s [2024-11-20T08:56:02.087Z] 710.40 IOPS, 44.40 MiB/s 00:22:31.171 Latency(us) 00:22:31.171 [2024-11-20T08:56:02.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.171 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:31.171 Verification LBA range: start 0x0 length 0x200 00:22:31.171 raid5f : 5.37 354.41 22.15 0.00 0.00 8907340.57 202.94 383206.87 00:22:31.171 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:31.171 Verification LBA range: start 0x200 length 0x200 00:22:31.171 raid5f : 5.35 355.92 22.25 0.00 0.00 8827363.68 219.69 383206.87 00:22:31.171 [2024-11-20T08:56:02.087Z] =================================================================================================================== 00:22:31.171 [2024-11-20T08:56:02.087Z] Total : 710.33 44.40 0.00 0.00 8867352.12 202.94 383206.87 00:22:32.550 00:22:32.550 real 0m7.712s 00:22:32.550 user 0m14.171s 00:22:32.550 sys 0m0.331s 00:22:32.550 08:56:03 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:32.550 08:56:03 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:22:32.550 ************************************ 00:22:32.550 END TEST bdev_verify_big_io 00:22:32.550 ************************************ 00:22:32.550 08:56:03 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:32.550 08:56:03 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:22:32.550 08:56:03 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:32.550 08:56:03 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:32.550 ************************************ 00:22:32.550 START TEST bdev_write_zeroes 00:22:32.550 ************************************ 00:22:32.550 08:56:03 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:32.550 [2024-11-20 08:56:03.372615] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:22:32.550 [2024-11-20 08:56:03.372827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91180 ] 00:22:32.809 [2024-11-20 08:56:03.567107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.809 [2024-11-20 08:56:03.721644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.378 Running I/O for 1 seconds... 
00:22:34.756 20367.00 IOPS, 79.56 MiB/s 00:22:34.756 Latency(us) 00:22:34.756 [2024-11-20T08:56:05.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.756 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:34.756 raid5f : 1.01 20329.34 79.41 0.00 0.00 6269.92 1980.97 10366.60 00:22:34.756 [2024-11-20T08:56:05.672Z] =================================================================================================================== 00:22:34.756 [2024-11-20T08:56:05.672Z] Total : 20329.34 79.41 0.00 0.00 6269.92 1980.97 10366.60 00:22:35.693 00:22:35.693 real 0m3.266s 00:22:35.693 user 0m2.821s 00:22:35.693 sys 0m0.305s 00:22:35.693 08:56:06 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.693 ************************************ 00:22:35.693 END TEST bdev_write_zeroes 00:22:35.693 ************************************ 00:22:35.693 08:56:06 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:22:35.693 08:56:06 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:35.694 08:56:06 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:22:35.694 08:56:06 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:35.694 08:56:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:35.694 ************************************ 00:22:35.694 START TEST bdev_json_nonenclosed 00:22:35.694 ************************************ 00:22:35.694 08:56:06 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:35.953 [2024-11-20 
08:56:06.677042] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:22:35.953 [2024-11-20 08:56:06.677219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91233 ] 00:22:35.953 [2024-11-20 08:56:06.847307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.211 [2024-11-20 08:56:06.972793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.211 [2024-11-20 08:56:06.972914] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:22:36.211 [2024-11-20 08:56:06.972951] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:36.211 [2024-11-20 08:56:06.972964] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:36.470 00:22:36.470 real 0m0.644s 00:22:36.470 user 0m0.407s 00:22:36.470 sys 0m0.131s 00:22:36.470 08:56:07 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:36.470 ************************************ 00:22:36.470 08:56:07 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:22:36.470 END TEST bdev_json_nonenclosed 00:22:36.470 ************************************ 00:22:36.470 08:56:07 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:36.470 08:56:07 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:22:36.470 08:56:07 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:36.470 08:56:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:36.470 
************************************ 00:22:36.470 START TEST bdev_json_nonarray 00:22:36.470 ************************************ 00:22:36.470 08:56:07 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:36.774 [2024-11-20 08:56:07.395568] Starting SPDK v25.01-pre git sha1 6fc96a60f / DPDK 24.03.0 initialization... 00:22:36.774 [2024-11-20 08:56:07.395755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91253 ] 00:22:36.774 [2024-11-20 08:56:07.586343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.053 [2024-11-20 08:56:07.716610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.053 [2024-11-20 08:56:07.716736] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:22:37.053 [2024-11-20 08:56:07.716764] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:37.053 [2024-11-20 08:56:07.716789] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:37.053 00:22:37.053 real 0m0.678s 00:22:37.053 user 0m0.431s 00:22:37.053 sys 0m0.140s 00:22:37.053 08:56:07 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:37.053 08:56:07 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:22:37.053 ************************************ 00:22:37.053 END TEST bdev_json_nonarray 00:22:37.053 ************************************ 00:22:37.313 08:56:07 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:22:37.313 08:56:07 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:22:37.313 08:56:07 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:22:37.313 08:56:08 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:22:37.313 08:56:08 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:22:37.313 08:56:08 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:22:37.313 08:56:08 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:37.313 08:56:08 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:22:37.313 08:56:08 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:22:37.313 08:56:08 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:22:37.313 08:56:08 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:22:37.313 00:22:37.313 real 0m48.812s 00:22:37.313 user 1m6.751s 00:22:37.313 sys 0m5.218s 00:22:37.313 08:56:08 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:37.313 08:56:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:37.313 
************************************ 00:22:37.313 END TEST blockdev_raid5f 00:22:37.313 ************************************ 00:22:37.313 08:56:08 -- spdk/autotest.sh@194 -- # uname -s 00:22:37.313 08:56:08 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:22:37.313 08:56:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:22:37.313 08:56:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:22:37.313 08:56:08 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:22:37.313 08:56:08 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:22:37.313 08:56:08 -- spdk/autotest.sh@260 -- # timing_exit lib 00:22:37.313 08:56:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:37.313 08:56:08 -- common/autotest_common.sh@10 -- # set +x 00:22:37.313 08:56:08 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:22:37.313 08:56:08 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:22:37.313 08:56:08 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:22:37.313 08:56:08 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:22:37.313 08:56:08 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:37.313 08:56:08 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:22:37.313 08:56:08 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:22:37.313 08:56:08 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:22:37.313 08:56:08 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:22:37.313 08:56:08 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:22:37.313 08:56:08 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:22:37.313 08:56:08 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:22:37.313 08:56:08 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:22:37.313 08:56:08 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:22:37.313 08:56:08 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:22:37.313 08:56:08 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:22:37.313 08:56:08 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:22:37.313 08:56:08 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:22:37.313 08:56:08 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:22:37.313 08:56:08 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:22:37.313 08:56:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:37.313 08:56:08 -- common/autotest_common.sh@10 -- # set +x 00:22:37.313 08:56:08 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:22:37.313 08:56:08 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:22:37.313 08:56:08 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:22:37.313 08:56:08 -- common/autotest_common.sh@10 -- # set +x 00:22:39.217 INFO: APP EXITING 00:22:39.217 INFO: killing all VMs 00:22:39.217 INFO: killing vhost app 00:22:39.217 INFO: EXIT DONE 00:22:39.217 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:39.217 Waiting for block devices as requested 00:22:39.217 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:39.217 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:40.154 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:40.154 Cleaning 00:22:40.154 Removing: /var/run/dpdk/spdk0/config 00:22:40.154 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:40.154 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:40.155 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:40.155 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:40.155 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:40.155 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:40.155 Removing: /dev/shm/spdk_tgt_trace.pid56814 00:22:40.155 Removing: /var/run/dpdk/spdk0 00:22:40.155 Removing: /var/run/dpdk/spdk_pid56573 00:22:40.155 Removing: /var/run/dpdk/spdk_pid56814 00:22:40.155 Removing: /var/run/dpdk/spdk_pid57043 00:22:40.155 Removing: /var/run/dpdk/spdk_pid57147 00:22:40.155 Removing: /var/run/dpdk/spdk_pid57203 00:22:40.155 Removing: /var/run/dpdk/spdk_pid57336 00:22:40.155 Removing: /var/run/dpdk/spdk_pid57360 
00:22:40.155 Removing: /var/run/dpdk/spdk_pid57570 00:22:40.155 Removing: /var/run/dpdk/spdk_pid57676 00:22:40.155 Removing: /var/run/dpdk/spdk_pid57794 00:22:40.155 Removing: /var/run/dpdk/spdk_pid57916 00:22:40.155 Removing: /var/run/dpdk/spdk_pid58024 00:22:40.155 Removing: /var/run/dpdk/spdk_pid58058 00:22:40.155 Removing: /var/run/dpdk/spdk_pid58100 00:22:40.155 Removing: /var/run/dpdk/spdk_pid58171 00:22:40.155 Removing: /var/run/dpdk/spdk_pid58282 00:22:40.155 Removing: /var/run/dpdk/spdk_pid58757 00:22:40.155 Removing: /var/run/dpdk/spdk_pid58834 00:22:40.155 Removing: /var/run/dpdk/spdk_pid58908 00:22:40.155 Removing: /var/run/dpdk/spdk_pid58924 00:22:40.155 Removing: /var/run/dpdk/spdk_pid59077 00:22:40.155 Removing: /var/run/dpdk/spdk_pid59099 00:22:40.155 Removing: /var/run/dpdk/spdk_pid59247 00:22:40.155 Removing: /var/run/dpdk/spdk_pid59268 00:22:40.155 Removing: /var/run/dpdk/spdk_pid59338 00:22:40.155 Removing: /var/run/dpdk/spdk_pid59356 00:22:40.155 Removing: /var/run/dpdk/spdk_pid59420 00:22:40.155 Removing: /var/run/dpdk/spdk_pid59443 00:22:40.155 Removing: /var/run/dpdk/spdk_pid59644 00:22:40.155 Removing: /var/run/dpdk/spdk_pid59675 00:22:40.155 Removing: /var/run/dpdk/spdk_pid59764 00:22:40.155 Removing: /var/run/dpdk/spdk_pid61148 00:22:40.155 Removing: /var/run/dpdk/spdk_pid61360 00:22:40.155 Removing: /var/run/dpdk/spdk_pid61505 00:22:40.155 Removing: /var/run/dpdk/spdk_pid62159 00:22:40.414 Removing: /var/run/dpdk/spdk_pid62371 00:22:40.414 Removing: /var/run/dpdk/spdk_pid62522 00:22:40.414 Removing: /var/run/dpdk/spdk_pid63171 00:22:40.414 Removing: /var/run/dpdk/spdk_pid63512 00:22:40.414 Removing: /var/run/dpdk/spdk_pid63652 00:22:40.414 Removing: /var/run/dpdk/spdk_pid65065 00:22:40.414 Removing: /var/run/dpdk/spdk_pid65329 00:22:40.414 Removing: /var/run/dpdk/spdk_pid65469 00:22:40.414 Removing: /var/run/dpdk/spdk_pid66883 00:22:40.414 Removing: /var/run/dpdk/spdk_pid67146 00:22:40.414 Removing: /var/run/dpdk/spdk_pid67292 
00:22:40.414 Removing: /var/run/dpdk/spdk_pid68706 00:22:40.414 Removing: /var/run/dpdk/spdk_pid69156 00:22:40.414 Removing: /var/run/dpdk/spdk_pid69303 00:22:40.414 Removing: /var/run/dpdk/spdk_pid70810 00:22:40.414 Removing: /var/run/dpdk/spdk_pid71082 00:22:40.414 Removing: /var/run/dpdk/spdk_pid71229 00:22:40.414 Removing: /var/run/dpdk/spdk_pid72748 00:22:40.414 Removing: /var/run/dpdk/spdk_pid73024 00:22:40.414 Removing: /var/run/dpdk/spdk_pid73171 00:22:40.414 Removing: /var/run/dpdk/spdk_pid74687 00:22:40.414 Removing: /var/run/dpdk/spdk_pid75187 00:22:40.414 Removing: /var/run/dpdk/spdk_pid75333 00:22:40.414 Removing: /var/run/dpdk/spdk_pid75478 00:22:40.414 Removing: /var/run/dpdk/spdk_pid75923 00:22:40.414 Removing: /var/run/dpdk/spdk_pid76696 00:22:40.414 Removing: /var/run/dpdk/spdk_pid77096 00:22:40.414 Removing: /var/run/dpdk/spdk_pid77794 00:22:40.414 Removing: /var/run/dpdk/spdk_pid78275 00:22:40.414 Removing: /var/run/dpdk/spdk_pid79068 00:22:40.414 Removing: /var/run/dpdk/spdk_pid79503 00:22:40.414 Removing: /var/run/dpdk/spdk_pid81503 00:22:40.414 Removing: /var/run/dpdk/spdk_pid81953 00:22:40.414 Removing: /var/run/dpdk/spdk_pid82401 00:22:40.414 Removing: /var/run/dpdk/spdk_pid84518 00:22:40.414 Removing: /var/run/dpdk/spdk_pid85005 00:22:40.414 Removing: /var/run/dpdk/spdk_pid85514 00:22:40.414 Removing: /var/run/dpdk/spdk_pid86589 00:22:40.414 Removing: /var/run/dpdk/spdk_pid86922 00:22:40.414 Removing: /var/run/dpdk/spdk_pid87879 00:22:40.414 Removing: /var/run/dpdk/spdk_pid88206 00:22:40.414 Removing: /var/run/dpdk/spdk_pid89162 00:22:40.414 Removing: /var/run/dpdk/spdk_pid89485 00:22:40.414 Removing: /var/run/dpdk/spdk_pid90173 00:22:40.414 Removing: /var/run/dpdk/spdk_pid90448 00:22:40.414 Removing: /var/run/dpdk/spdk_pid90514 00:22:40.414 Removing: /var/run/dpdk/spdk_pid90555 00:22:40.414 Removing: /var/run/dpdk/spdk_pid90807 00:22:40.414 Removing: /var/run/dpdk/spdk_pid90988 00:22:40.414 Removing: /var/run/dpdk/spdk_pid91081 
00:22:40.414 Removing: /var/run/dpdk/spdk_pid91180 00:22:40.414 Removing: /var/run/dpdk/spdk_pid91233 00:22:40.414 Removing: /var/run/dpdk/spdk_pid91253 00:22:40.414 Clean 00:22:40.414 08:56:11 -- common/autotest_common.sh@1453 -- # return 0 00:22:40.414 08:56:11 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:22:40.414 08:56:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.414 08:56:11 -- common/autotest_common.sh@10 -- # set +x 00:22:40.414 08:56:11 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:22:40.414 08:56:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.414 08:56:11 -- common/autotest_common.sh@10 -- # set +x 00:22:40.673 08:56:11 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:40.673 08:56:11 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:40.673 08:56:11 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:40.673 08:56:11 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:22:40.673 08:56:11 -- spdk/autotest.sh@398 -- # hostname 00:22:40.673 08:56:11 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:40.932 geninfo: WARNING: invalid characters removed from testname! 
00:23:07.508 08:56:36 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:10.042 08:56:40 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:12.577 08:56:43 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:15.139 08:56:45 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:18.426 08:56:48 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:20.969 08:56:51 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:23.507 08:56:53 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:23.507 08:56:53 -- spdk/autorun.sh@1 -- $ timing_finish 00:23:23.507 08:56:53 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:23:23.507 08:56:53 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:23.507 08:56:53 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:23:23.507 08:56:53 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:23.507 + [[ -n 5203 ]] 00:23:23.507 + sudo kill 5203 00:23:23.517 [Pipeline] } 00:23:23.532 [Pipeline] // timeout 00:23:23.537 [Pipeline] } 00:23:23.551 [Pipeline] // stage 00:23:23.557 [Pipeline] } 00:23:23.570 [Pipeline] // catchError 00:23:23.579 [Pipeline] stage 00:23:23.581 [Pipeline] { (Stop VM) 00:23:23.594 [Pipeline] sh 00:23:23.875 + vagrant halt 00:23:27.194 ==> default: Halting domain... 00:23:31.391 [Pipeline] sh 00:23:31.700 + vagrant destroy -f 00:23:34.981 ==> default: Removing domain... 
00:23:34.993 [Pipeline] sh 00:23:35.274 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output 00:23:35.283 [Pipeline] } 00:23:35.298 [Pipeline] // stage 00:23:35.305 [Pipeline] } 00:23:35.324 [Pipeline] // dir 00:23:35.330 [Pipeline] } 00:23:35.345 [Pipeline] // wrap 00:23:35.351 [Pipeline] } 00:23:35.365 [Pipeline] // catchError 00:23:35.375 [Pipeline] stage 00:23:35.377 [Pipeline] { (Epilogue) 00:23:35.392 [Pipeline] sh 00:23:35.674 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:42.251 [Pipeline] catchError 00:23:42.252 [Pipeline] { 00:23:42.265 [Pipeline] sh 00:23:42.548 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:42.548 Artifacts sizes are good 00:23:42.557 [Pipeline] } 00:23:42.571 [Pipeline] // catchError 00:23:42.582 [Pipeline] archiveArtifacts 00:23:42.590 Archiving artifacts 00:23:42.694 [Pipeline] cleanWs 00:23:42.707 [WS-CLEANUP] Deleting project workspace... 00:23:42.707 [WS-CLEANUP] Deferred wipeout is used... 00:23:42.714 [WS-CLEANUP] done 00:23:42.716 [Pipeline] } 00:23:42.732 [Pipeline] // stage 00:23:42.738 [Pipeline] } 00:23:42.752 [Pipeline] // node 00:23:42.758 [Pipeline] End of Pipeline 00:23:42.798 Finished: SUCCESS